Archive logs

Big picture

Archive Calico Cloud logs to SIEMs like Syslog, Splunk, or Amazon S3 to meet compliance storage requirements.

Value

Archiving your Calico Cloud Elasticsearch logs to storage services like Amazon S3, Syslog, or Splunk is a reliable way to maintain and consolidate your compliance data long term.

Before you begin

Supported logs for export

  • Syslog - flow, dns, idsevents, audit
  • Amazon S3 - l7, flow, dns, runtime, audit
  • Splunk - flow, audit, dns

How to

note

Because Calico Cloud and Kubernetes logs are integral to Calico Cloud diagnostics, there is no mechanism to tune down the verbosity. To manage log verbosity, filter logs using your SIEM.

  1. Create an AWS bucket to store your logs. You will need the bucket name, region, access key ID, secret access key, and bucket path in the following steps. If you need to create the bucket, see the example below.
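
     One way to create the bucket is with the AWS CLI. This is a minimal sketch assuming the AWS CLI is installed and configured with credentials that can create buckets; the bucket name and region are placeholders:

     # Create the bucket; omit --create-bucket-configuration when the region is us-east-1.
     aws s3api create-bucket \
       --bucket <S3-bucket-name> \
       --region <S3-bucket-region> \
       --create-bucket-configuration LocationConstraint=<S3-bucket-region>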

  2. Create a Secret in the tigera-operator namespace named log-collector-s3-credentials with the fields key-id and key-secret. Example:

     kubectl create secret generic log-collector-s3-credentials \
       --from-literal=key-id=<AWS-access-key-id> \
       --from-literal=key-secret=<AWS-secret-key> \
       -n tigera-operator
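
     Optionally, confirm that the Secret exists before continuing. This is only a read-back check, assuming kubectl is pointed at the cluster where Calico Cloud is installed:

     kubectl get secret log-collector-s3-credentials -n tigera-operator
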
  3. Update the LogCollector resource named tigera-secure to include an S3 section with the information you noted above. Example:

    apiVersion: operator.tigera.io/v1
    kind: LogCollector
    metadata:
      name: tigera-secure
    spec:
      additionalStores:
        s3:
          bucketName: <S3-bucket-name>
          bucketPath: <path-in-S3-bucket>
          region: <S3-bucket-region>

    This can be done during installation by editing custom-resources.yaml before applying it, or after installation by editing the resource with the following command:

    kubectl edit logcollector tigera-secure
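
    After saving the change, you can read the resource back to confirm the S3 section is present. This is only a verification sketch, assuming you have kubectl access to the cluster:

    kubectl get logcollector tigera-secure -o yaml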