Configure storage for logs and reports
Big picture
Before installing Calico Enterprise, you must configure persistent storage for flow logs, DNS logs, audit logs, and compliance reports.
Concepts
Before configuring a storage class for Calico Enterprise, familiarize yourself with the following terms to understand how the storage pieces interact.
Persistent volume
Used by pods to persist data within the cluster. Combined with persistent volume claims, pods can retain data across restarts and rescheduling.
Persistent volume claim
Used by pods to request and mount storage volumes. The claim specifies the volume requirements for the request: size, access rights, and storage class.
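For example, a claim that requests storage from the `tigera-elasticsearch` storage class created later in this guide might look like the following sketch. The claim name, namespace, and size are illustrative only, not values required by Calico Enterprise:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-log-storage   # hypothetical name, for illustration only
  namespace: example-ns       # hypothetical namespace
spec:
  accessModes:
    - ReadWriteOnce           # access rights requested by the pod
  resources:
    requests:
      storage: 100Gi          # requested size; adjust for your environment
  storageClassName: tigera-elasticsearch
```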
Dynamic provisioner
Provisions types of persistent volumes on demand. Although most managed public-cloud clusters provide a dynamic provisioner using cloud-specific storage APIs (for example, Amazon EBS or Google persistent disks), not all clusters have a dynamic provisioner.
When a pod makes a persistent volume claim from a storage class that uses a dynamic provisioner, the volume is automatically created. If the storage class does not use a dynamic provisioner (for example the local storage class), the volumes must be created in advance. For help, see the Kubernetes documentation.
Storage class
The storage provided by the cluster. Storage classes can be used with dynamic provisioners to automatically provision persistent volumes on demand, or with manually provisioned persistent volumes. Different storage classes provide different service levels.
Before you begin...
Review log storage recommendations
Review Log storage recommendations for guidance on the number of nodes and resources to configure for your environment.
Determine storage support
Determine the storage types that are available on your cluster. If you are using dynamic provisioning, verify it is supported. If you are using local disks, you may find the sig-storage local static provisioner useful. It creates and manages PersistentVolumes by watching for disks mounted in a configured directory.
Do not use the host path storage provisioner. This provisioner is not suitable for production and results in scalability issues, instability, and data loss.
Do not use shared network file systems, such as AWS EFS or Azure Files (azure-file). These file systems can degrade performance and cause data loss.
How to
Create a storage class
Before installing Calico Enterprise, create a storage class named `tigera-elasticsearch`.
Examples
Pre-provisioned local disks
In the following example, we create a StorageClass to use when explicitly adding PersistentVolumes for local disks. This can be done manually, or by using the sig-storage local static provisioner.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```
If local persistent volumes are provisioned on an SELinux-enabled host, you can use the `/mnt/tigera` host path created by the Calico Enterprise policy package.
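Because the local storage class has no dynamic provisioner, each volume must be created in advance. The following sketch shows one such manually created PersistentVolume bound to the `tigera-elasticsearch` class; the volume name, capacity, disk path, and node name are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tigera-elasticsearch-pv-0   # hypothetical name
spec:
  capacity:
    storage: 500Gi                  # assumed size; see Log storage recommendations
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: tigera-elasticsearch
  local:
    path: /mnt/tigera/disk0         # assumed mount point under /mnt/tigera
  nodeAffinity:                     # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1     # hypothetical node name
```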
AWS EBS disks
In the following examples for an AWS cloud provider integration, the StorageClass depends on how your EBS disks are provisioned:
- Amazon EBS CSI driver
- Legacy in-tree Kubernetes EBS driver
If you use the Amazon EBS CSI driver, make sure the CSI plugin is enabled in your cluster and apply the following manifest.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```
If you use the legacy in-tree EBS driver, apply the following manifest instead.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```
AKS Azure Disks storage
In the following example for an AKS cloud provider integration, the StorageClass tells Calico Enterprise to use locally redundant (LRS) managed disks for log storage.
Premium storage is recommended for databases larger than 100 GiB and for production installations.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```
GCP Persistent Disks
In the following example for a GKE cloud provider integration, the StorageClass tells Calico Enterprise to use GCE Persistent Disks for log storage.
Two disk types are currently available: `pd-standard` and `pd-ssd`. For production deployments, we recommend the `pd-ssd` storage type.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tigera-elasticsearch
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: none
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```