Adjust log storage size
Adjust the size of the Calico Enterprise log storage during or after installation.
By default, Calico Enterprise creates the log storage with a single node. This makes it easy to get started using Calico Enterprise. Generally, a single node for logs is fine for test or development purposes. Before going to production, you should scale the number of nodes, replicas, CPU, and memory to reflect a production environment.
This how-to guide uses the following Calico Enterprise features:
- LogStorage resource
Log storage terms

| Term | Definition |
| --- | --- |
| node | A running instance of the log storage. |
| cluster | A collection of nodes. Multiple nodes protect the cluster from any single node failing, and let you scale resources (CPU, memory, storage space). |
| replica | A copy of data. Replicas protect against data loss if a node fails. The number of replicas must be less than the number of nodes. |
Before you begin...
Review log storage recommendations
Review Log storage recommendations for guidance on the number of nodes and resources to configure for your environment.
If you are not using a dynamic provisioner, make sure a persistent volume is available before updating the resource requirements (CPU, memory, storage) in this section. To check that a persistent volume has the status `Available`, run this command:

```shell
kubectl get pv | grep tigera-elasticsearch
```
In the following example, Calico Enterprise is configured to install 3 nodes with 200Gi of storage each and 1 replica. Whenever you change the storage size, revisit `resourceRequirements` to make sure the CPU and memory values still support the new size. The CPU and memory values below are illustrative; adjust them for your environment.

```yaml
apiVersion: operator.tigera.io/v1
kind: LogStorage
metadata:
  name: tigera-secure
spec:
  nodes:
    count: 3
    # This section sets the resource requirements for each individual Elasticsearch node.
    resourceRequirements:
      limits:
        cpu: "2"
        memory: 4Gi
      requests:
        cpu: "1"
        memory: 4Gi
        storage: 200Gi
  indices:
    replicas: 1
  componentResources:
    # This section sets the resource requirements for the operator that bootstraps the Elasticsearch cluster.
    - componentName: ECKOperator
      resourceRequirements:
        limits:
          memory: 512Mi
        requests:
          memory: 512Mi
```
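Once you have edited the LogStorage manifest, you can apply it and watch the log storage nodes roll out. This is a sketch of the typical workflow; it assumes the manifest is saved as `logstorage.yaml` (a hypothetical filename) and that your installation uses the default LogStorage resource name, `tigera-secure`:

```shell
# Apply the updated LogStorage resource.
kubectl apply -f logstorage.yaml

# Confirm the resource reflects the new node count and replica settings.
kubectl get logstorage tigera-secure -o yaml

# Watch the Elasticsearch pods as the new nodes come up.
kubectl get pods -n tigera-elasticsearch -w
```

These commands require access to a running cluster, so there is no output shown here; the pod list should eventually show one ready Elasticsearch pod per configured node.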