Pods that are not running with spec.hostNetwork: true are assigned an IP from the Pod network by the CNI plugin.
A CNI plugin has multiple responsibilities: it assigns IP addresses to Pods (IPAM), connects Pods to the Pod network, enables Pod-to-Pod traffic across nodes, and enforces network policies.
Currently, "Canal" is the only CNI plugin the MetaKube Core supports.
It is automatically installed in every cluster.
Canal is a combination of Flannel and Calico, each of which covers a subset of the above-mentioned responsibilities.
Canal is configured to use "host-local" IPAM.
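With host-local IPAM, each node allocates Pod IPs locally from its own slice of the Pod network, without cluster-wide coordination. A minimal sketch of the slicing, assuming a hypothetical /16 Pod network chosen to match the 172.25.x.0/24 slices shown later on this page:

```python
import ipaddress

# Hypothetical cluster-wide Pod network; the /24 slices shown later
# (172.25.0.0/24, 172.25.1.0/24, ...) would be carved out of it.
pod_network = ipaddress.ip_network("172.25.0.0/16")

# host-local IPAM: each node gets its own /24 slice and allocates
# Pod IPs from it independently of the other nodes.
node_slices = list(pod_network.subnets(new_prefix=24))

print(node_slices[3])  # 172.25.3.0/24
```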
Flannel creates an overlay over the node network using VXLAN.
Frames are encapsulated into UDP datagrams, transported between nodes on UDP port 8472, and decapsulated on the receiving side.
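The encapsulation explains the reduced MTU visible on the overlay interface: each packet carries the inner Ethernet frame plus outer IPv4, UDP, and VXLAN headers, 50 bytes in total. A sketch of the arithmetic, assuming a standard 1500-byte MTU on the node network:

```python
# VXLAN (IPv4) encapsulation overhead per packet, in bytes.
OUTER_IP = 20        # outer IPv4 header
OUTER_UDP = 8        # outer UDP header (destination port 8472 for Flannel)
VXLAN_HEADER = 8     # VXLAN header carrying the VNI
INNER_ETHERNET = 14  # header of the encapsulated Pod-side Ethernet frame

overhead = OUTER_IP + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET
node_mtu = 1500  # assumed Ethernet MTU on the node network

print(node_mtu - overhead)  # 1450 -- the MTU reported for flannel.1
```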
Flannel creates a flannel.1 bridge connected to the main Ethernet interface of the host. This bridge has an IP matching the node's /24 Pod network slice with a 0 suffix, e.g. 172.25.3.0 when the slice for the node is 172.25.3.0/24:

```
$ ip -d a show flannel.1
9: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 22:3d:4f:df:d7:d7 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
    vxlan id 1 local 192.168.1.44 dev ens3 srcport 0 0 dstport 8472 nolearning ttl auto ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 172.25.3.0/32 brd 172.25.3.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::203d:4fff:fedf:d7d7/64 scope link
       valid_lft forever preferred_lft forever
```
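The inet address of flannel.1 can be derived from the node's slice: it is simply the slice's network address (the "0 suffix"), configured as a /32 host address. A small sketch using Python's ipaddress module, with the slice value taken from the example above:

```python
import ipaddress

# The node's Pod network slice, as in the ip output above.
node_slice = ipaddress.ip_network("172.25.3.0/24")

# flannel.1 carries the network address of the slice as a /32.
bridge_ip = ipaddress.ip_interface(f"{node_slice.network_address}/32")

print(bridge_ip)  # 172.25.3.0/32
```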
Calico creates a pair of veth devices for each Pod: one in the Pod's network namespace (eth0@ifx) carrying the Pod's IP, and one on the host side (calixxxxxxxxxxx@ifx) connected to the bridge.
For each Pod network slice, there is a corresponding route to the bridge on the respective host using the flannel.1 interface. This means traffic to Pods on other nodes is routed through the flannel.1 interface (and thus encapsulated) and sent to the host where the Pod lives.
```
$ ip r | grep flannel
172.25.0.0/24 via 172.25.0.0 dev flannel.1 onlink
172.25.1.0/24 via 172.25.1.0 dev flannel.1 onlink
172.25.2.0/24 via 172.25.2.0 dev flannel.1 onlink
```
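The effect of these routes can be sketched as a simple table lookup: a destination Pod IP is matched against the per-node slices, and the slice's network address serves as the (onlink) gateway for the encapsulated traffic. A toy illustration, not the kernel's actual routing code, using the routes from the output above:

```python
import ipaddress

# Per-slice routes as shown in the `ip r` output: each remote /24
# slice is reachable via flannel.1, with the slice's network
# address acting as the onlink gateway.
routes = {
    ipaddress.ip_network("172.25.0.0/24"): "172.25.0.0",
    ipaddress.ip_network("172.25.1.0/24"): "172.25.1.0",
    ipaddress.ip_network("172.25.2.0/24"): "172.25.2.0",
}

def next_hop(pod_ip: str) -> str:
    """Pick the gateway for a Pod IP by matching its node's slice."""
    addr = ipaddress.ip_address(pod_ip)
    for slice_, gateway in routes.items():
        if addr in slice_:
            return gateway
    raise LookupError(f"no route for {pod_ip}")

print(next_hop("172.25.1.17"))  # 172.25.1.0 -> encapsulated via flannel.1
```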
Calico implements Kubernetes Network Policies.
You can restrict a Pod's ingress to certain sources and its egress to certain destinations to harden your cluster.
For more information on Kubernetes Network Policies, see the official Kubernetes documentation.
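As a minimal illustration (all names here are hypothetical), the following NetworkPolicy allows ingress to Pods labeled app: backend only from Pods labeled app: frontend in the same namespace; all other ingress to those Pods is denied:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend  # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: backend       # the Pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # the only allowed source Pods
```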
Service networking is implemented by kube-proxy.
There are two modes of kube-proxy: iptables (legacy) and ipvs. ipvs is superior to the iptables mode in terms of performance and scaling. For this reason, iptables is no longer available for new clusters. If you want to migrate your existing cluster to ipvs, please contact the SysEleven Support.
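The scaling difference can be pictured with a toy contrast (not the real implementations): iptables mode evaluates Service rules as a linear chain, while ipvs keeps a hash table of virtual services, so lookup cost no longer grows with the number of Services. All addresses below are made up for illustration:

```python
import random

# Hypothetical Service table: virtual IP -> backend Pod endpoints.
services = {f"10.96.0.{i}:80": [f"172.25.0.{i}:8080"] for i in range(1, 200)}

def iptables_lookup(vip):
    # Linear scan, like traversing an iptables chain rule by rule.
    for rule_vip, backends in services.items():
        if rule_vip == vip:
            return random.choice(backends)
    return None

def ipvs_lookup(vip):
    # Constant-time hash lookup on the virtual service address.
    backends = services.get(vip)
    return random.choice(backends) if backends else None

print(ipvs_lookup("10.96.0.42:80"))  # 172.25.0.42:8080
```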
For more information on Kubernetes Services, see the official Kubernetes documentation.
Pods can also communicate directly with nodes (e.g. via node ports) and with Pods running with spec.hostNetwork: true, and vice versa.