Konnectivity has replaced OpenVPN in all MetaKube clusters.


There are several scenarios where the Kubernetes API server needs to reach Pods in the cluster,
virtual Service IPs, or the Kubelets running on the Kubernetes nodes, for example:

  • When a client runs kubectl exec | logs | port-forward | cp | attach | debug
  • Contacting an admission webhook that is deployed in the cluster

In the case of MetaKube, where the control plane runs in an isolated network, these requests cannot be routed directly, since the nodes may not be reachable from the internet.
Some kind of tunnel is therefore required.

API server network proxy, a.k.a. Konnectivity

Instead of tunneling arbitrary network traffic into the cluster, Konnectivity provides kube-apiserver with tunnels that take the place of the TCP connections that would normally be opened for such requests.
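On the kube-apiserver side, this is wired up via an egress selector configuration, following the upstream Konnectivity setup. A minimal sketch; the socket path is illustrative and depends on the deployment:

```yaml
# Passed to kube-apiserver via --egress-selector-config-file.
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
  # Route "cluster" traffic (Pods, Services, Kubelets) through Konnectivity.
  - name: cluster
    connection:
      proxyProtocol: GRPC
      transport:
        uds:
          # Illustrative path: a Unix socket shared with the Konnectivity server.
          udsName: /etc/kubernetes/konnectivity-server/konnectivity-server.socket
```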

The Konnectivity agents, running in the cluster, establish TCP connections to the Konnectivity servers, which run alongside kube-apiserver.
These connections are then used to tunnel requests from kube-apiserver to the destination Pod, Service or Kubelet.

This has several advantages over other methods, such as VPN:

  • Higher bandwidth
    • The outer connections use TLS (with an efficient encryption algorithm)
    • Inner traffic may remain unencrypted
    • Components can run multithreaded
    • Connections get multiplexed on the tunnels, saving TCP connections
  • Lower complexity
    • Single server & client component
    • No NAT or additional routing
  • Highly available
    • Horizontally scalable


  • Note: metrics-server can't reach the Kubelet endpoints directly and hence needs to run inside the cluster.