# Archive logs
## Big picture
Archive Calico Enterprise logs to SIEMs like Syslog, Splunk, or Amazon S3 to meet compliance storage requirements.
## Value
Archiving your Calico Enterprise Elasticsearch logs to storage services such as Amazon S3, Syslog, or Splunk is a reliable way to maintain and consolidate your compliance data long term.
## Before you begin
**Supported logs for export**
- **Syslog**: flow, dns, idsevents, audit
- **Splunk**: flow, audit, dns
- **Amazon S3**: l7, flow, dns, audit
## How to
Because Calico Enterprise and Kubernetes logs are integral to Calico Enterprise diagnostics, there is no mechanism to reduce their verbosity at the source. To manage log volume, filter the logs in your SIEM.
The following sections describe how to send logs to each supported SIEM: Amazon S3, Syslog, and Splunk.

### Amazon S3
1. Create an AWS bucket to store your logs (see the optional AWS CLI sketch after these steps). You will need the bucket name, region, access key ID, secret access key, and bucket path in the following steps.
2. Create a Secret in the `tigera-operator` namespace named `log-collector-s3-credentials` with the fields `key-id` and `key-secret`. Example:

   ```bash
   kubectl create secret generic log-collector-s3-credentials \
     --from-literal=key-id=<AWS-access-key-id> \
     --from-literal=key-secret=<AWS-secret-key> \
     -n tigera-operator
   ```
3. Update the LogCollector resource named `tigera-secure` to include an S3 section with the information noted above. Example:

   ```yaml
   apiVersion: operator.tigera.io/v1
   kind: LogCollector
   metadata:
     name: tigera-secure
   spec:
     additionalStores:
       s3:
         bucketName: <S3-bucket-name>
         bucketPath: <path-in-S3-bucket>
         region: <S3-bucket-region>
   ```

   This can be done during installation by editing `custom-resources.yaml` before applying it, or after installation by editing the resource with the command:

   ```bash
   kubectl edit logcollector tigera-secure
   ```
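Step 1 above assumes you already have a bucket. If you are creating one from scratch, a minimal sketch using the AWS CLI might look like the following; the bucket name, region, and path are placeholders matching the YAML above, and your IAM credentials must allow bucket creation.

```bash
# Create the bucket in the desired region (placeholder values).
aws s3 mb s3://<S3-bucket-name> --region <S3-bucket-region>

# After the LogCollector is configured, verify that logs appear under the bucket path.
aws s3 ls s3://<S3-bucket-name>/<path-in-S3-bucket>/ --recursive
```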
### Syslog
1. Update the LogCollector resource named `tigera-secure` to include a Syslog section with your syslog information. Example:

   ```yaml
   apiVersion: operator.tigera.io/v1
   kind: LogCollector
   metadata:
     name: tigera-secure
   spec:
     additionalStores:
       syslog:
         # (Required) Syslog endpoint, in the format protocol://host:port
         endpoint: tcp://1.2.3.4:514
         # (Optional) If messages are being truncated, set this field
         packetSize: 1024
         # (Required) Types of logs to forward to Syslog (must specify at least one option)
         logTypes:
           - Audit
           - DNS
           - Flows
           - IDSEvents
   ```

   This can be done during installation by editing `custom-resources.yaml` before applying it, or after installation by editing the resource with the command:

   ```bash
   kubectl edit logcollector tigera-secure
   ```
2. You can control which types of Calico Enterprise log data to send to syslog. The Syslog section contains a field called `logTypes` that lists the log types to include (see the sketch after this list for an example that forwards only a subset). The allowed log types are:

   - Audit
   - DNS
   - Flows
   - IDSEvents

   Refer to the Syslog section for more details on what data each log type represents.
   **Note:** The log type `IDSEvents` is only supported for clusters that have LogStorage configured, because intrusion detection event data is pulled directly from the corresponding LogStorage datastore.

   The `logTypes` field is required, which means you must specify at least one type of log to export to syslog.
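For example, assuming LogStorage is not configured and you only need audit and flow data, a minimal sketch of the syslog section could restrict `logTypes` to those two entries (endpoint value is a placeholder):

```yaml
apiVersion: operator.tigera.io/v1
kind: LogCollector
metadata:
  name: tigera-secure
spec:
  additionalStores:
    syslog:
      # Syslog endpoint (placeholder)
      endpoint: tcp://1.2.3.4:514
      # Forward only audit and flow logs; omit IDSEvents when LogStorage is not configured
      logTypes:
        - Audit
        - Flows
```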
#### TLS configuration
1. You can enable TLS for syslog forwarding by including the `encryption` option in the Syslog section. Example:

   ```yaml
   apiVersion: operator.tigera.io/v1
   kind: LogCollector
   metadata:
     name: tigera-secure
   spec:
     additionalStores:
       syslog:
         # (Required) Syslog endpoint, in the format protocol://host:port
         endpoint: tcp://1.2.3.4:514
         # (Optional) If messages are being truncated, set this field
         packetSize: 1024
         # (Optional) To configure TLS mode
         encryption: TLS
         # (Required) Types of logs to forward to Syslog (must specify at least one option)
         logTypes:
           - Audit
           - DNS
           - Flows
           - IDSEvents
   ```
2. Using the self-signed CA with the field name `tls.crt`, create a ConfigMap in the `tigera-operator` namespace named `syslog-ca`. Example:

   ```bash
   kubectl create configmap syslog-ca --from-file=tls.crt -n tigera-operator
   ```

   **Note:** Skip this step if the public CA bundle is sufficient to verify the server certificates.
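As an optional sanity check, not part of the official procedure, you can inspect the CA certificate before loading it and confirm the ConfigMap was created. This sketch assumes `tls.crt` is in your current directory, matching the step above.

```bash
# Inspect the CA certificate that will be used to verify the syslog server.
openssl x509 -in tls.crt -noout -subject -issuer -dates

# Confirm the ConfigMap exists in the tigera-operator namespace.
kubectl get configmap syslog-ca -n tigera-operator -o yaml
```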
### Splunk

**Support:** In this release, only Splunk Enterprise is supported.

Calico Enterprise uses Splunk's HTTP Event Collector to send data to the Splunk server. To copy the flow, audit, and DNS logs to Splunk, follow these steps:
1. Create an HTTP Event Collector token by following the steps listed in Splunk's documentation for your specific Splunk version (for example, the instructions for Splunk version 8.0.0).
2. Create a Secret in the `tigera-operator` namespace named `logcollector-splunk-credentials` with the field `token`. Example:

   ```bash
   kubectl create secret generic logcollector-splunk-credentials \
     --from-literal=token=<splunk-hec-token> \
     -n tigera-operator
   ```
3. Update the LogCollector resource named `tigera-secure` to include a Splunk section with your Splunk information. Example:

   ```yaml
   apiVersion: operator.tigera.io/v1
   kind: LogCollector
   metadata:
     name: tigera-secure
   spec:
     additionalStores:
       splunk:
         # Splunk HTTP Event Collector endpoint, in the format protocol://host:port
         endpoint: https://1.2.3.4:8088
   ```

   This can be done during installation by editing `custom-resources.yaml` before applying it, or after installation by editing the resource with the command:

   ```bash
   kubectl edit logcollector tigera-secure
   ```
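If you want to confirm that the HEC endpoint and token work before relying on the LogCollector configuration, a quick sketch using Splunk's standard HEC event endpoint is shown below. The host, port, and token are placeholders, and `-k` skips certificate verification for self-signed setups.

```bash
# Send a test event to the Splunk HTTP Event Collector.
# A successful response indicates the endpoint and token are usable.
curl -k https://1.2.3.4:8088/services/collector/event \
  -H "Authorization: Splunk <splunk-hec-token>" \
  -d '{"event": "Calico Enterprise HEC connectivity test"}'
```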