Every MetaKube Node has one address of type InternalIP: the private IP in the OpenStack network of the cluster. A Node has an additional address of type ExternalIP if its MachineDeployment is configured to use floating IPs.
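To check which addresses your nodes actually expose, you can read them from the Kubernetes API. A minimal sketch with the official Python client, assuming a reachable kubeconfig:

```python
# Minimal sketch: print every address of every node (assumes a working kubeconfig).
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    for addr in node.status.addresses:
        # Prints e.g. "worker-1 InternalIP 192.168.1.10"
        print(node.metadata.name, addr.type, addr.address)
```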
The CNI plugin is installed as a DaemonSet, so a Pod runs on each node.
The nodes carry the Pod-to-Pod traffic, either as a VXLAN overlay or via direct routing, and must allow the corresponding connections between each other.
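MetaKube normally manages the security group rules for this traffic itself. Purely as an illustration of what such a rule looks like, here is a sketch with openstacksdk that permits VXLAN traffic between nodes sharing a security group; the group name is a placeholder, and the UDP port depends on your CNI (flannel-style kernel VXLAN commonly uses 8472, the IANA standard port is 4789):

```python
# Sketch only: allow node-to-node VXLAN traffic within one security group.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a matching clouds.yaml entry
sg = conn.network.find_security_group("my-cluster-nodes")  # placeholder name
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    protocol="udp",
    port_range_min=8472,    # assumption: flannel-style VXLAN; IANA standard is 4789
    port_range_max=8472,
    remote_group_id=sg.id,  # only traffic from members of the same group, i.e. other nodes
)
```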
MetaKube can manage floating IPs for each server of a MachineDeployment.
This has certain implications you should consider:
Node acts as NAT gateway directly
The source IP of a node and its Pods is distinct from that of other nodes. This may be desirable, for example to avoid IP-based rate limits of certain APIs. It also avoids port collisions for egress traffic at the shared NAT gateway (router).
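One way to observe the distinct egress IPs is to pin a short-lived Pod to a node and ask an IP echo service which source address it sees. A hypothetical sketch with the Python client; the node name and the echo service are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="egress-check"),
    spec=client.V1PodSpec(
        node_name="worker-1",    # placeholder: the node whose egress IP you want to see
        restart_policy="Never",
        containers=[client.V1Container(
            name="curl",
            image="curlimages/curl",
            # The echo service answers with the source IP it sees -- the node's
            # floating IP when the node NATs its Pods directly.
            args=["-s", "https://ifconfig.me"],
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
# Once the Pod has completed, its log contains the egress IP:
# print(v1.read_namespaced_pod_log("egress-check", "default"))
```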
Node ports are open to the public
This may be intentional and the reason to use floating IPs in the first place.
But also consider other ports you may not want to expose.
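A quick way to review what would become publicly reachable is to list all services that allocate node ports; a small sketch with the Python client:

```python
from kubernetes import client, config

config.load_kube_config()
for svc in client.CoreV1Api().list_service_for_all_namespaces().items:
    # LoadBalancer services allocate node ports as well, by default.
    if svc.spec.type in ("NodePort", "LoadBalancer"):
        for port in svc.spec.ports:
            if port.node_port:
                # Each node port is reachable on every node's floating IP.
                print(svc.metadata.namespace, svc.metadata.name, port.node_port)
```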
Additional cost for floating IPs
If you need all egress traffic from your cluster to originate from a well-known CIDR, consider a dedicated floating IP pool.
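The pools available to your project can be listed as its external networks; a sketch with openstacksdk, assuming a configured clouds.yaml entry:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a matching clouds.yaml entry
for net in conn.network.networks():
    # External networks are the pools that floating IPs are allocated from.
    if net.is_router_external:
        print(net.id, net.name)
```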
When there are free floating IPs in the project, MetaKube first attempts to associate them with machine ports. This behavior is deprecated; we are looking to replace it with a more explicit mechanism.
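To see whether your project currently holds unassociated floating IPs that this legacy behavior would pick up, you can list them; a sketch with openstacksdk:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a matching clouds.yaml entry
for ip in conn.network.ips():  # the project's floating IPs
    if ip.port_id is None:     # not associated with any port
        print(ip.floating_ip_address, "is free and may be reused")
```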
The nodes must be able to communicate with the following peers.
MetaKube does not restrict any egress by default. Traffic from the following peers is enabled by default through security group rules: