Kubernetes Flow Exporter#

Added in version 19.7.0: Currently in Beta.

Kubernetes Flow Exporter is a high-performance monitoring tool designed to export network flow data from a Kubernetes cluster. It uses eBPF to capture container network traffic and exports flow data and cluster vitals as IPFIX with comprehensive Kubernetes metadata.

Features#

Designed to provide deep visibility into Kubernetes network traffic and cluster health, the Kubernetes Flow Exporter offers the following key capabilities:

  • eBPF-based traffic capture: Captures packets directly in the kernel for high efficiency with minimal overhead

  • Kubernetes integration: Automatically enriches flow data with pod, namespace, container, node, and service metadata

  • Dual export capability: Delivers both network flow data and Kubernetes vitals (pods, nodes, workloads, jobs, etc.)

  • IPFIX export: Standards-compliant flow export with custom enterprise fields for Kubernetes context

  • Container-aware monitoring: Monitors all container network interfaces on Kubernetes nodes

  • Scalable architecture: Runs as a DaemonSet for cluster-wide coverage

  • Observability: Includes built-in Prometheus metrics and comprehensive health endpoints for easy monitoring

  • Flexible configuration: Environment-based configuration for quick and customized deployments

  • Metrics server integration: Automatically detects and integrates with the Kubernetes metrics server

Prerequisites#

The following are required to deploy and run the Kubernetes Flow Exporter:

  • Kubernetes cluster with kernel 4.18+ (eBPF support required)

  • Privileged container support

  • [Optional] Kubernetes metrics server for resource usage data

Required privileges#

The container requires privileged access for eBPF operations:

  • NET_ADMIN: For network interface manipulation

  • SYS_ADMIN: For eBPF program loading

  • BPF: For eBPF syscalls

  • PERFMON: For performance monitoring

  • SYS_RESOURCE: For resource management
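As a reference sketch only, a container securityContext granting these capabilities might look like the following; the Helm chart configures this automatically, and some environments instead run the container fully privileged (privileged: true), which implies all capabilities:

```yaml
# Illustrative securityContext for the exporter container (reference only;
# the Helm chart sets this up during installation).
securityContext:
  capabilities:
    add:
      - NET_ADMIN      # network interface manipulation
      - SYS_ADMIN      # eBPF program loading
      - BPF            # eBPF syscalls
      - PERFMON        # performance monitoring
      - SYS_RESOURCE   # resource management
```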

Role-Based Access Control (RBAC) permissions#

The exporter requires the following minimum Kubernetes API permissions:

  • pods: Read pod information and status

  • nodes: Read node information

  • namespaces: Read namespace metadata

  • services/endpoints: Read service information

  • deployments/statefulsets/daemonsets: Read workload information

  • jobs/cronjobs: Read job information

  • horizontalpodautoscalers: Read HPA information
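For reference, a ClusterRole granting these read permissions could be sketched as follows; the Helm chart installs its own RBAC objects, so this is illustrative only, and the object name is hypothetical:

```yaml
# Illustrative ClusterRole covering the minimum read permissions listed above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: k8s-flow-exporter   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "namespaces", "services", "endpoints"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["get", "list", "watch"]
```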

Configuring the Kubernetes Flow Exporter#

All configuration is managed through the values.yaml file or by using --set parameters during installation. A ConfigMap is automatically generated from these values, so manual edits to ConfigMaps are not required.

The following tables show the configurable parameters and their default settings.

IPFIX configuration#

Parameter             Description                     Default
--------------------  ------------------------------  ----------
ipfix.collectorHost   IPFIX collector hostname/IP     10.42.1.39
ipfix.collectorPort   IPFIX collector port            2055
ipfix.exportInterval  Flow export interval (seconds)  60
ipfix.flowTimeout     Flow timeout (seconds)          300

Vitals configuration#

Parameter                  Description                              Default
-------------------------  ---------------------------------------  -------
vitals.enabled             Enable/disable vitals collection         true
vitals.collectionInterval  Collection interval (seconds)            60
vitals.exportInterval      Export interval (seconds)                60
vitals.cacheTimeout        Cache timeout (seconds)                  120
vitals.metricsServerCheck  Metrics server check interval (seconds)  300

Application configuration#

Parameter        Description                                    Default
---------------  ---------------------------------------------  -------
app.logLevel     Log level (debug, info, warn, error)           info
app.logFilePath  Optional log file path (empty logs to stdout)  (empty)
app.metricsPort  Prometheus metrics server port                 8080
app.healthPort   Health check port                              8081

eBPF configuration#

Parameter             Description                                                  Default
--------------------  -----------------------------------------------------------  -------
ebpf.bufferSize       eBPF buffer size                                             1024
ebpf.interfaceFilter  Interface filter pattern (empty matches all container        (empty)
                      interfaces)

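As an illustration, several of the parameters above can be combined in a custom values file and passed to Helm with -f instead of repeated --set flags; the values shown here are examples only, not recommendations:

```yaml
# my-values.yaml - example overrides (illustrative values)
ipfix:
  collectorHost: 10.1.5.20   # replace with your Scrutinizer collector IP
  collectorPort: 2055
  exportInterval: 60
vitals:
  enabled: true
  collectionInterval: 60
app:
  logLevel: debug
ebpf:
  bufferSize: 2048
```

The file would then be applied with, for example, helm install k8s-flow-exporter plixer/k8s-flow-exporter -f my-values.yaml.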

Deploying the Kubernetes Flow Exporter#

The Kubernetes Flow Exporter is deployed using Helm, which must be installed on the deployment host. For more information, see the Helm Quickstart Guide.

Once Helm is installed, run the following command on the deployment host:

helm repo add plixer https://files.plixer.com/helm/k8s-flow-exporter
helm repo update
helm install k8s-flow-exporter plixer/k8s-flow-exporter --set ipfix.collectorHost=<YOUR-IPFIX-COLLECTOR-HERE>

Alternatively, to install from a local Helm chart directory:

helm install k8s-flow-exporter ./helm-chart \
--set ipfix.collectorHost=<YOUR-IPFIX-COLLECTOR-HERE>

Note

Replace <YOUR-IPFIX-COLLECTOR-HERE> with the Scrutinizer IP address where flows should be sent.

Export data types#

The Kubernetes Flow Exporter provides two main types of data:

Network flow data

Captured directly with eBPF, network flow records are enriched with comprehensive Kubernetes metadata, including pod, service, and node information.

Kubernetes vitals data

In addition to flows, the exporter periodically collects Kubernetes vitals, which are performance and health metrics from the cluster. This provides a resource and workload-level view to complement traffic data.

Collected vitals include:

  • Pod vitals: CPU/memory usage, resource requests/limits, restart counts, readiness status

  • Node vitals: Capacity, allocatable resources, usage metrics, node conditions

  • Workload vitals: Deployment, StatefulSet, DaemonSet replica status and health

  • Job vitals: Job and CronJob execution status and timing

  • HPA vitals: Horizontal Pod Autoscaler metrics and scaling status

Monitoring and observability#

The Kubernetes Flow Exporter provides several ways to track its health, performance, and activity, and can easily integrate with monitoring tools like Prometheus or external collectors.

Health endpoints

Built-in HTTP endpoints to verify service health and inspect internal state:

  • GET /health - Returns service health status

  • GET /status - Returns detailed component statistics including vitals collector status

  • GET /vitals - Returns real-time snapshot of Kubernetes vitals currently being collected (pods, nodes, workloads)

Prometheus metrics

The following metrics are exposed on the Prometheus metrics port (default 8080):
  • ebpf_netflow_flows_processed_total - Total number of flows processed, with labels for success/failure status

  • ebpf_netflow_flows_exported_total - Total number of flows successfully exported to the collector

  • ebpf_netflow_metadata_enrichment_total - Total count of enriched versus missing metadata records

  • ebpf_netflow_map_size - Current eBPF map size

  • ebpf_netflow_exporter_connected - Exporter connection status (1=connected, 0=disconnected)
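If Prometheus is not already discovering pods in the cluster, a scrape job along the following lines can pick up these metrics. This is a sketch, not chart-provided configuration; the pod label (app=k8s-flow-exporter) matches the one used in the troubleshooting commands below, and the port is the app.metricsPort default (8080):

```yaml
# Illustrative Prometheus scrape job for the exporter's metrics endpoint.
scrape_configs:
  - job_name: k8s-flow-exporter
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only exporter pods (label assumed from the DaemonSet).
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: k8s-flow-exporter
        action: keep
      # Scrape the metrics port (app.metricsPort, default 8080).
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        replacement: $1:8080
        target_label: __address__
```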

Logs

Logs are exported in a structured JSON format to make them machine-readable and easier to parse with logging systems.

Key log events include:

  • eBPF program loading and attachment

  • Flow processing and export statistics

  • Kubernetes metadata cache updates

  • Connection status to NetFlow collector

  • Vitals collection and export statistics

  • Metrics server availability status

Troubleshooting#

The following are common issues that may be encountered when running the Kubernetes Flow Exporter, along with recommended steps to help diagnose them.

eBPF program load failures#

If the eBPF program fails to load, verify that your kernel supports eBPF (4.18 or later):

uname -r
sudo ls /sys/kernel/debug/tracing/

No flows exported#

If there is no flow data sent to your collector:

Confirm that the exporter has successfully attached to network interfaces:

kubectl logs -l app=k8s-flow-exporter -n kube-system | grep "Attached eBPF"

Verify the connectivity of the exporter to the collector:

kubectl exec -it <pod-name> -n kube-system -- nc -u <collector-ip> <collector-port>

Missing Kubernetes metadata#

If flow records are missing pod, namespace, or service information:

Verify that the exporter's RBAC permissions allow access to Kubernetes resources:

kubectl auth can-i get pods --as=system:serviceaccount:kube-system:k8s-flow-exporter

Check the metadata cache for status and errors:

curl http://<pod-ip>:8081/status

Vitals collection issues#

If Kubernetes vitals (pods, nodes, workloads) are not being exported:

Verify the vitals collector is running and reporting data:

curl http://<pod-ip>:8081/vitals

Check that the Kubernetes metrics server is available:

kubectl top nodes
kubectl logs -l app=k8s-flow-exporter -n kube-system | grep "metrics server"