Calico Open Source 3.28 (latest) documentation

Prometheus metrics

Felix can be configured to report a number of metrics through Prometheus. See the configuration reference for how to enable metrics reporting.

Metric reference

Felix specific

Felix exports a number of Prometheus metrics; the current set is listed below. Because some metrics are tied to particular implementation choices inside Felix, we can't make any hard guarantees that metrics will persist across releases. However, we aim not to make any spurious changes to existing metrics.

felix_active_local_endpoints: Number of active endpoints on this host.
felix_active_local_policies: Number of active policies on this host.
felix_active_local_selectors: Number of active selectors on this host.
felix_calc_graph_output_events: Number of events emitted by the calculation graph.
felix_calc_graph_update_time_seconds: Seconds to update calculation graph for each datastore OnUpdate call.
felix_calc_graph_updates_processed: Number of datastore updates processed by the calculation graph.
felix_cluster_num_host_endpoints: Total number of host endpoints cluster-wide.
felix_cluster_num_hosts: Total number of Calico hosts in the cluster.
felix_cluster_num_workload_endpoints: Total number of workload endpoints cluster-wide.
felix_exec_time_micros: Summary of time taken to fork/exec child processes.
felix_int_dataplane_addr_msg_batch_size: Number of interface address messages processed in each batch. Higher values indicate we're doing more batching to try to keep up.
felix_int_dataplane_apply_time_seconds: Time in seconds that it took to apply a dataplane update.
felix_int_dataplane_failures: Number of times dataplane updates failed and will be retried.
felix_int_dataplane_iface_msg_batch_size: Number of interface state messages processed in each batch. Higher values indicate we're doing more batching to try to keep up.
felix_int_dataplane_messages: Number of dataplane messages by type.
felix_int_dataplane_msg_batch_size: Number of messages processed in each batch. Higher values indicate we're doing more batching to try to keep up.
felix_ipset_calls: Number of ipset commands executed.
felix_ipset_errors: Number of ipset command failures.
felix_ipset_lines_executed: Number of ipset operations executed.
felix_ipsets_calico: Number of active Calico IP sets.
felix_ipsets_total: Total number of active IP sets.
felix_iptables_chains: Number of active iptables chains.
felix_iptables_lines_executed: Number of iptables rule updates executed.
felix_iptables_restore_calls: Number of iptables-restore calls.
felix_iptables_restore_errors: Number of iptables-restore errors.
felix_iptables_rules: Number of active iptables rules.
felix_iptables_save_calls: Number of iptables-save calls.
felix_iptables_save_errors: Number of iptables-save errors.
felix_resync_state: Current datastore state.
felix_resyncs_started: Number of times Felix has started resyncing with the datastore.
felix_route_table_list_seconds: Time taken to list all the interfaces during a resync.
felix_route_table_per_iface_sync_seconds: Time taken to sync each interface.

Prometheus metrics are self-documenting. With metrics enabled, curl can be used to list them along with their help text and type information.

curl -s http://localhost:9091/metrics | head

Example response:

# HELP felix_active_local_endpoints Number of active endpoints on this host.
# TYPE felix_active_local_endpoints gauge
felix_active_local_endpoints 91
# HELP felix_active_local_policies Number of active policies on this host.
# TYPE felix_active_local_policies gauge
felix_active_local_policies 0
# HELP felix_active_local_selectors Number of active selectors on this host.
# TYPE felix_active_local_selectors gauge
felix_active_local_selectors 82
...
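Because the exposition format is line-oriented, it is straightforward to inspect by hand. The sketch below is not part of Calico (a real deployment would use a Prometheus client library or server to scrape the endpoint); it simply shows how the `# HELP`/`# TYPE` lines and samples in output like the example above fit together:

```python
# Minimal sketch of parsing the Prometheus text exposition format.

def parse_metrics(text):
    """Return {metric_name: {"help": ..., "type": ..., "value": ...}}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("# HELP "):
            _, _, name, help_text = line.split(" ", 3)
            metrics.setdefault(name, {})["help"] = help_text
        elif line.startswith("# TYPE "):
            _, _, name, mtype = line.split(" ", 3)
            metrics.setdefault(name, {})["type"] = mtype
        elif line and not line.startswith("#"):
            name, _, value = line.partition(" ")
            # Drop any {label="..."} suffix from the sample name.
            name = name.split("{", 1)[0]
            metrics.setdefault(name, {})["value"] = float(value)
    return metrics

sample = """\
# HELP felix_active_local_endpoints Number of active endpoints on this host.
# TYPE felix_active_local_endpoints gauge
felix_active_local_endpoints 91
"""
parsed = parse_metrics(sample)
print(parsed["felix_active_local_endpoints"])
```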

Label indexing metrics

The label index is a subcomponent of Felix that is responsible for calculating the set of endpoints and network sets that match each selector that is in an active policy rule. Policy rules are active on a particular node if the policy they belong to selects a workload or host endpoint on that node with its top-level selector (in spec.selector). Inactive policies have minimal CPU cost because their selectors do not get indexed.

Since the label index must match the active selectors against all endpoints and network sets in the cluster, its performance is critical and it supports various optimizations to minimize CPU usage. Its metrics can be used to check that the optimizations are active for your policy set.

felix_label_index_num_endpoints

Reports the total number of endpoints (and similar objects such as network sets) being tracked by the index. This should match the number of endpoints and network sets in your cluster.

felix_label_index_num_active_selectors{optimized="true|false"}

Reports the total number of active selectors, broken into optimized="true" and optimized="false" sub-totals.

The optimized="true" total tracks the number of selectors that the label index was able to optimize. Those selectors should be calculated efficiently even in clusters with hundreds of thousands of endpoints. In general the CPU used to calculate them should be proportional to the number of endpoints that match them and the churn rate of those endpoints.

The optimized="false" total tracks the number of selectors that could not be optimized. Unoptimized selectors are much more costly to calculate; the CPU used to calculate them is proportional to the number of endpoints in the cluster and their churn rate. It is generally OK to have a handful of unoptimized selectors, but if many selectors are unoptimized the CPU usage can be substantial at high scale.
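As an illustration (the numbers below are hypothetical, not from any real cluster), the two sub-totals can be combined into an "unoptimized fraction" to watch in monitoring:

```python
# Illustration with hypothetical values of
# felix_label_index_num_active_selectors{optimized="true"/"false"}.

def unoptimized_fraction(num_optimized, num_unoptimized):
    """Fraction of active selectors that fall back to the slow path."""
    total = num_optimized + num_unoptimized
    return num_unoptimized / total if total else 0.0

# e.g. 240 optimized selectors and 6 unoptimized ones.
frac = unoptimized_fraction(240, 6)
print(f"{frac:.1%} of active selectors are unoptimized")
```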

For more information on writing selectors that can be optimized, see this section of the NetworkPolicy reference.

felix_label_index_selector_evals{result="true|false"}

Counts the total number of times that a selector was evaluated vs an endpoint to determine if it matches, broken down by match (true) or no-match (false). The ratio of match to no-match shows how effective the selector indexing optimizations are for your policy set. The more effectively the label index can optimize the selectors, the fewer "no-match" results it will report relative to "match".

If you have more than a handful of active selectors and felix_label_index_selector_evals{result="false"} is many times felix_label_index_selector_evals{result="true"} then it is likely that some selectors in the policy set are not being optimized effectively.
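The rule of thumb above can be expressed as a simple check. The threshold here is an assumption for illustration, not a value defined by Calico; tune it for your own policy set:

```python
# Sketch using hypothetical scraped counter values of
# felix_label_index_selector_evals{result="true"} and {result="false"}.

def eval_efficiency(match_count, no_match_count):
    """No-match evaluations per match; a high ratio suggests some
    selectors are not being optimized effectively."""
    return no_match_count / match_count if match_count else float("inf")

ratio = eval_efficiency(match_count=10_000, no_match_count=250_000)
if ratio > 10:  # illustrative threshold, not a Calico-defined limit
    print(f"{ratio:.0f}x more no-match evals than matches; "
          "check for unoptimized selectors")
```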

felix_label_index_strategy_evals{strategy="..."}

This is a technical statistic that shows how many times the label index has employed each optimization strategy that it has available. The strategies will likely evolve over time but, at time of writing, they are as follows:

  • endpoint-full-scan: the least efficient fallback strategy for unoptimized selectors. The index scanned all endpoints to find the matches for a selector.

  • endpoint|parent-no-match: the most efficient strategy; the index was able to prove that nothing matched the selector so it was able to skip the scan entirely.

  • endpoint|parent-single-value: the label index was able to limit the scan to only those endpoints/parents that have a particular label and value combination. For example, selector label == "value" would only scan items that had exactly that label set to "value".

  • endpoint|parent-multi-value: the label index was able to limit the scan to only those endpoints/parents that have a particular label and one of a few values. For example, selector label in {"a", "b"} would only scan items that had exactly that label with one of the given values.

  • endpoint|parent-label-name: the label index was able to limit the scan to only those endpoints/parents that have a particular label (but was unable to limit it to a particular subset of values). For example, has(label) would result in that kind of scan.

Terminology: here "endpoint" means "endpoint or NetworkSet" and "parent" is Felix's internal name for resources like Kubernetes Namespaces. A "parent" scan means that the label index scanned all endpoints that have a parent matching the strategy.

CPU / memory metrics

Felix also exports the default set of metrics provided by the Prometheus client library. Currently, those include:

go_gc_duration_seconds: A summary of the GC invocation durations.
go_goroutines: Number of goroutines that currently exist.
go_memstats_alloc_bytes: Number of bytes allocated and still in use.
go_memstats_alloc_bytes_total: Total number of bytes allocated, even if freed.
go_memstats_buck_hash_sys_bytes: Number of bytes used by the profiling bucket hash table.
go_memstats_frees_total: Total number of frees.
go_memstats_gc_sys_bytes: Number of bytes used for garbage collection system metadata.
go_memstats_heap_alloc_bytes: Number of heap bytes allocated and still in use.
go_memstats_heap_idle_bytes: Number of heap bytes waiting to be used.
go_memstats_heap_inuse_bytes: Number of heap bytes that are in use.
go_memstats_heap_objects: Number of allocated objects.
go_memstats_heap_released_bytes_total: Total number of heap bytes released to OS.
go_memstats_heap_sys_bytes: Number of heap bytes obtained from system.
go_memstats_last_gc_time_seconds: Number of seconds since 1970 of last garbage collection.
go_memstats_lookups_total: Total number of pointer lookups.
go_memstats_mallocs_total: Total number of mallocs.
go_memstats_mcache_inuse_bytes: Number of bytes in use by mcache structures.
go_memstats_mcache_sys_bytes: Number of bytes used for mcache structures obtained from system.
go_memstats_mspan_inuse_bytes: Number of bytes in use by mspan structures.
go_memstats_mspan_sys_bytes: Number of bytes used for mspan structures obtained from system.
go_memstats_next_gc_bytes: Number of heap bytes when next garbage collection will take place.
go_memstats_other_sys_bytes: Number of bytes used for other system allocations.
go_memstats_stack_inuse_bytes: Number of bytes in use by the stack allocator.
go_memstats_stack_sys_bytes: Number of bytes obtained from system for stack allocator.
go_memstats_sys_bytes: Number of bytes obtained by system. Sum of all system allocations.
process_cpu_seconds_total: Total user and system CPU time spent in seconds.
process_max_fds: Maximum number of open file descriptors.
process_open_fds: Number of open file descriptors.
process_resident_memory_bytes: Resident memory size in bytes.
process_start_time_seconds: Start time of the process since unix epoch in seconds.
process_virtual_memory_bytes: Virtual memory size in bytes.

WireGuard metrics

Felix also exports WireGuard device statistics when a WireGuard device is detected. This can be disabled via Felix configuration.

wireguard_meta: Gauge. Device/interface information for a Felix/Calico node; the values are carried in this metric's labels.
wireguard_bytes_rcvd: Counter. Total bytes received from a peer, identified by peer public key and endpoint.
wireguard_bytes_sent: Counter. Total bytes sent to a peer, identified by peer public key and endpoint.
wireguard_latest_handshake_seconds: Gauge. Time of the last handshake with a peer, as a Unix timestamp in seconds.
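Because the byte metrics are cumulative counters, throughput must be derived from the difference between two scrapes. The sketch below uses made-up sample values; a monitoring system such as Prometheus does the equivalent calculation for you with its rate functions:

```python
# Sketch: derive per-peer throughput from two scrapes of the
# wireguard_bytes_rcvd counter (hypothetical sample values).

def counter_rate(prev, curr, interval_seconds):
    """Bytes/second between two counter samples. Counters only
    increase; a decrease indicates a reset, so return None."""
    delta = curr - prev
    return delta / interval_seconds if delta >= 0 else None

# Two scrapes 30 seconds apart for one peer.
rate = counter_rate(prev=1_250_000, curr=1_850_000, interval_seconds=30)
print(f"{rate:.0f} B/s")
```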