What Happened at KubeCon EU 2026 That Makes This the Tipping Point
KubeCon + CloudNativeCon Europe 2026 took place in Amsterdam in April 2026, drawing more than 13,000 engineers and making it one of the largest cloud-native gatherings to date. The headline observability announcement came from Splunk: the beta launch of OpenTelemetry eBPF Instrumentation (OBI), a zero-code observability solution that captures telemetry directly from the Linux kernel — no code changes, no service restarts, no sidecar agents required.
The technical premise of OBI is straightforward, but its implications are significant. Traditional distributed tracing requires developers to instrument their application code with OpenTelemetry SDKs — adding spans, propagating context headers, configuring exporters. This works well for greenfield services but fails for legacy codebases and for compiled Go, Rust, and C++ binaries where source code modification is slow or impossible. eBPF changes the model fundamentally: by running programs in the Linux kernel’s extended Berkeley Packet Filter virtual machine, OBI intercepts system calls and network traffic at the kernel level, reconstructing distributed traces and RED (Rate, Errors, Duration) metrics without touching application code.
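Once spans exist, the RED side of that reconstruction is plain aggregation. A minimal sketch — the span shape and function names here are illustrative, not OBI’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Span:
    service: str
    duration_ms: float
    error: bool

def red_metrics(spans, window_s):
    """Aggregate Rate / Errors / Duration per service over a time window."""
    acc = {}
    for s in spans:
        m = acc.setdefault(s.service, {"count": 0, "errors": 0, "durations": []})
        m["count"] += 1
        m["errors"] += s.error          # bool counts as 0/1
        m["durations"].append(s.duration_ms)
    return {
        svc: {
            "rate_rps": m["count"] / window_s,
            "error_ratio": m["errors"] / m["count"],
            "p50_ms": sorted(m["durations"])[len(m["durations"]) // 2],
        }
        for svc, m in acc.items()
    }
```

The point is that nothing in this computation needs application cooperation — only the kernel-observed request boundaries and status.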
Splunk’s announcement positions OBI as complementary to existing OpenTelemetry SDKs — it fills visibility gaps in uninstrumented services without duplicating data from services that are already instrumented. The practical effect is full-stack observability coverage across a Kubernetes cluster regardless of the age, language, or instrumentation status of the workloads running on it.
The simultaneous general availability of the Splunk Operator for Kubernetes — a Kubernetes-native tool for deploying and managing Splunk Enterprise with Horizontal Pod Autoscalers and Pod Disruption Budgets — signals that Splunk is no longer treating Kubernetes as an edge deployment target but as its primary operational environment.
What the Cilium 1.19 mTLS Announcement Adds
The second major KubeCon EU 2026 eBPF announcement came from Cilium, the eBPF-based CNI that is now the default networking plugin for GKE, EKS, and AKS. Cilium version 1.19 introduced native mutual TLS support — encrypted, mutually authenticated service-to-service communication — without requiring sidecar containers.
The significance is architectural. Previous approaches to Kubernetes service mesh security (Istio with Envoy sidecars, Linkerd) relied on injecting a proxy container alongside every application pod. This approach adds 50 to 200 MB of memory overhead per pod, introduces a proxy-layer latency of 1 to 5 milliseconds per request, and requires operators to manage proxy lifecycle alongside application lifecycle. Cilium 1.19’s implementation uses eBPF alongside ztunnel — a Rust-based proxy component from Istio — to provide session-specific mTLS at the kernel level, eliminating packet loss during TLS handshakes and improving throughput through data aggregation. Setup requires three steps: enabling ztunnel in Helm charts, deploying ztunnel, and applying a namespace label. No sidecar injection, no proxy lifecycle management.
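Sketched in Helm/kubectl terms, the three steps might look like the following — the value and label names are assumptions for illustration, not confirmed Cilium 1.19 keys, so check the release docs for the exact spelling:

```shell
# Illustrative only -- value and label names are assumptions, not confirmed 1.19 keys.
helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set ztunnel.enabled=true                        # step 1: enable ztunnel in the chart

kubectl -n kube-system rollout status ds/ztunnel    # step 2: wait for the ztunnel DaemonSet

kubectl label namespace payments ambient-mtls=enabled   # step 3: opt the namespace in
```

Note what is absent: no mutating webhook for sidecar injection and no per-pod restart, which is the operational payoff the release is claiming.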
This matters for enterprises running hundreds or thousands of pods: sidecar overhead at scale is not a configuration detail, it is a capacity planning problem. At 1,000 pods, 100 MB sidecar overhead means 100 GB of memory consumed by proxy infrastructure that contributes zero application functionality. eBPF-native mTLS eliminates that overhead while maintaining the same security guarantees.
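The capacity math above generalizes to a one-liner; the defaults below use the article’s 100 MB-per-sidecar figure:

```python
def sidecar_overhead_gb(pods: int, sidecar_mb: float = 100) -> float:
    """Cluster-wide memory consumed by per-pod sidecar proxies, in decimal GB."""
    return pods * sidecar_mb / 1000  # decimal GB, matching the in-text figure

# 1,000 pods at 100 MB each -> 100 GB of memory spent on proxy infrastructure
```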
What Platform Engineering Teams Should Do Now
1. Evaluate OBI Beta Deployment for Legacy Service Observability Gaps Before GA
Most enterprise Kubernetes clusters have a mix of well-instrumented services (greenfield, built with OpenTelemetry from the start) and dark services (legacy applications, vendor-supplied containers, third-party databases) that have zero observability coverage. OBI’s beta is the opportunity to close those gaps without a code change backlog. The evaluation path is straightforward: deploy OBI as a DaemonSet on a non-production cluster, run it alongside existing OpenTelemetry collectors, and compare trace coverage before and after. CNCF’s OpenTelemetry project has published a KubeCon EU 2026 observability guide that covers OBI integration with existing collector configurations. The evaluation typically reveals 15 to 40 percent of cluster traffic that was previously invisible to distributed tracing — inter-service calls in uninstrumented languages, database query patterns, external API calls from legacy services.
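One way to quantify the before/after comparison is a set difference over service names — a sketch of the bookkeeping, not Splunk tooling; the input lists would come from your cluster inventory and your tracing backend:

```python
def coverage_gap(deployed, traced):
    """Services deployed in the cluster but absent from distributed traces.

    `deployed` might come from a service inventory, `traced` from the
    tracing backend's service catalog. Both are iterables of names.
    Returns the sorted list of "dark" services and their fraction.
    """
    dark = sorted(set(deployed) - set(traced))
    return dark, len(dark) / len(set(deployed))
```

Running this before and after the OBI DaemonSet is deployed makes the 15-to-40-percent figure verifiable for your own cluster rather than an industry average.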
2. Migrate Sidecar-Based Service Mesh to Cilium 1.19 Ambient Mode Before Expanding Kubernetes Cluster Size
If your current service mesh relies on sidecar proxies (Istio + Envoy, or Linkerd), the Cilium 1.19 ambient mTLS release is the migration target that eliminates sidecar overhead at scale. The migration path to Cilium ambient mode has three phases: (1) deploy Cilium as CNI alongside existing mesh for network policy enforcement only, (2) enable ambient mode for new namespaces while running sidecars for legacy namespaces, (3) migrate legacy namespaces incrementally as application teams validate behavior. Meta reduced CPU load by 20% fleet-wide through eBPF-based infrastructure changes of this type. Cloudflare processes 10 million packets per second using eBPF for DDoS mitigation. Datadog cut CPU usage by 35% by switching from traditional agents to eBPF-based collection. These are not benchmark numbers — they are production outcomes from organizations that completed migrations.
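The three phases map to a small number of Helm/kubectl operations. `policyEnforcementMode` is an existing Cilium chart value, but the ambient-mode namespace label in phases 2 and 3 is illustrative — confirm the key against the 1.19 documentation before scripting this:

```shell
# Phase 1: Cilium installed as CNI for network policy only; the existing mesh keeps mTLS
helm install cilium cilium/cilium -n kube-system \
  --set policyEnforcementMode=default

# Phase 2: ambient mTLS for new namespaces only (label name is illustrative)
kubectl label namespace checkout-v2 ambient-mtls=enabled

# Phase 3: migrate a legacy namespace once its team validates behavior;
# the trailing "-" removes Istio's sidecar-injection label
kubectl label namespace billing istio-injection- ambient-mtls=enabled
```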
3. Standardize on OpenTelemetry Collector as the Single Telemetry Pipeline — OBI, SDK Spans, and Logs into One Collector
The most common enterprise observability failure mode is telemetry fragmentation: separate agents for logs (Fluentd, Filebeat), separate agents for metrics (Prometheus node exporter, Datadog agent), separate agents for traces (Jaeger agent, Zipkin), and now potentially a separate eBPF agent for kernel-level data. Each agent competes for node resources, each has a separate configuration lifecycle, and none of the data is correlated by default. KubeCon EU 2026’s Splunk announcement included beta support for native log ingestion via OpenTelemetry Protocol — meaning a single OpenTelemetry Collector can now aggregate traces (from OBI and SDK instrumentation), metrics (from Prometheus exporters), and logs (from application stdout and kernel events) into a unified pipeline. This is the architecture to standardize on before OBI reaches GA: one DaemonSet, one pipeline, full-stack telemetry from kernel to application layer.
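A collapsed pipeline follows the Collector’s standard receivers/processors/exporters layout. The sketch below uses real OpenTelemetry Collector component names (`otlp`, `prometheus`, `batch`, `otlphttp`); the scrape target and backend endpoint are placeholders:

```yaml
receivers:
  otlp:                      # traces, metrics, and logs from OBI and SDK-instrumented services
    protocols:
      grpc:
      http:
  prometheus:
    config:
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ["localhost:9100"]   # placeholder node exporter
processors:
  batch:
exporters:
  otlphttp:
    endpoint: https://backend.example.com:4318   # placeholder backend
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

One DaemonSet running this configuration replaces the per-signal agent sprawl described above.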
Where eBPF Goes From Here in Enterprise Environments
The KubeCon EU 2026 announcements are the public-facing confirmation of a shift that has been building for three years. The eBPF Foundation — backed by the Linux Foundation, with members including Meta, Google, Microsoft, Netflix, and Isovalent — has driven standardization that makes eBPF programs portable across Linux kernel versions and cloud provider environments. Cilium is now the default CNI for all three major managed Kubernetes services (GKE, EKS, AKS). OBI is in beta with a 1.0 GA roadmap. The CNCF TAGs survey showing 67% enterprise eBPF adoption is consistent with the industry trajectory.
What changes at GA is not the technology but the support model. OBI at beta requires platform engineers with Linux kernel knowledge to troubleshoot eBPF program behavior. At GA, Splunk’s enterprise support contract covers OBI — which means a CISO can approve it for production, a compliance officer can audit it against SOC2 and ISO 27001 requirements, and a procurement team can include it in a standard vendor contract. That enterprise legitimacy layer — support, indemnification, compliance documentation — is what separates a technology that 67% of teams experiment with from a technology that 67% of teams run in production at their most critical services.
Frequently Asked Questions
What is the minimum Linux kernel version required to run eBPF-based observability tools like OBI?
OBI and Cilium 1.19 require Linux kernel 5.10 or higher for full feature support, including BPF CO-RE (Compile Once, Run Everywhere) portability. Kernel 5.15 LTS is the recommended baseline. Most cloud-managed Kubernetes services (GKE, EKS, AKS) run kernel versions above 5.15 by default. On-premises Kubernetes clusters on older enterprise Linux distributions (RHEL 7, CentOS 7) cannot run eBPF-based observability tools and require OS upgrade before adoption. Red Hat Enterprise Linux 8+ (kernel 4.18 with backported BPF features) provides partial support; RHEL 9 provides full support.
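A portable way to gate a rollout on that kernel floor is a version comparison via GNU `sort -V` — a small helper you might drop into a node preflight check:

```shell
# kernel_ok VERSION -- succeeds if VERSION is >= 5.10 (the OBI/Cilium 1.19 floor)
kernel_ok() {
  required="5.10"
  [ "$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)" = "$required" ]
}

# Usage on a node:
#   kernel_ok "$(uname -r)" || echo "kernel too old for eBPF-based observability"
```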
Does eBPF-based observability create security risks by running code in the Linux kernel?
eBPF programs run in a sandboxed kernel virtual machine with a formal verifier that checks every program before loading it, ensuring it cannot crash the kernel, loop infinitely, or access unauthorized memory. This is fundamentally different from kernel modules (which run with full kernel privileges and can cause system crashes). Production-grade eBPF tools like Cilium, OBI, and Tetragon have their programs reviewed by Linux kernel maintainers and ship with signed BPF bytecode that cannot be tampered with after build. The risk profile is lower than traditional kernel module-based networking or monitoring tools, not higher.
How does OBI handle service-to-service calls that cross namespace or cluster boundaries?
OBI captures trace context at the kernel network layer and reconstructs distributed trace spans for traffic entering and leaving pods. For cross-namespace calls within the same cluster, OBI correlates traces using network socket metadata. For cross-cluster calls, OBI supports W3C TraceContext propagation when the calling service includes a traceparent header — OBI on the receiving cluster picks up the context and continues the trace. For calls from completely uninstrumented external services that do not propagate trace context, OBI creates new root spans at the receiving service boundary, providing local visibility but without cross-system correlation.
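The W3C `traceparent` header mentioned above has a fixed four-field shape (`version-traceid-spanid-flags`). A minimal parser illustrating the continue-or-new-root decision — shape checks only, not full spec validation:

```python
def parse_traceparent(header: str):
    """Parse a W3C traceparent header: version-traceid-spanid-flags.

    Returns the parsed context, or None on malformed input -- in which
    case a receiver starts a new root span instead of continuing a trace.
    """
    parts = header.strip().split("-")
    if len(parts) != 4:
        return None
    version, trace_id, span_id, flags = parts
    if len(trace_id) != 32 or len(span_id) != 16 or len(flags) != 2:
        return None
    if trace_id == "0" * 32 or span_id == "0" * 16:   # all-zero IDs are invalid
        return None
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_span_id": span_id,
        "sampled": bool(int(flags, 16) & 0x01),       # sampled flag is bit 0
    }
```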
Sources & Further Reading
- Splunk Introduces OpenTelemetry eBPF Instrumentation at KubeCon EU 2026 — Cloud Native Now
- KubeCon EU 2026: Kubernetes Matures, BSD, eBPF, and mTLS — Heise
- Splunk KubeCon EU 2026 Observability Innovations — Splunk Blog
- eBPF for Kubernetes Observability: Zero-Instrumentation Monitoring in 2026 — Gheware DevOps
- OpenTelemetry at KubeCon + CloudNativeCon Europe 2026 — OpenTelemetry Blog