By default, MetaKube creates one Server Group in OpenStack for each cluster.
Its policy is "soft-anti-affinity": OpenStack tries to spread the servers across different hardware nodes, but if no suitable hardware node is available, it schedules the server anyway.
The policy also applies to server migrations.
You may want more control over which of your nodes may share the same hardware and which may not, or you may want a strict anti-affinity policy.
To achieve this, create additional server groups yourself and specify them in your MachineDeployments.
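As an illustration, a strict anti-affinity group can be created with the OpenStack CLI and then referenced from a MachineDeployment. The exact field name for the server group reference depends on your MetaKube and machine-controller version; serverGroup below is an assumption, so check your installation's reference documentation:

```yaml
# Create the server group first, e.g.:
#   openstack server group create --policy anti-affinity my-strict-group
#
# Hypothetical MachineDeployment excerpt; the "serverGroup" field name
# is an assumption -- verify it against your MetaKube version.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
spec:
  template:
    spec:
      providerSpec:
        value:
          cloudProviderSpec:
            serverGroup: "<server-group-id>"  # ID from "openstack server group list"
```

With a strict anti-affinity policy, OpenStack refuses to schedule a server when no separate hardware node is available, instead of falling back to a shared one.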
Whenever a Pod is created, its spec.nodeName
field is initially empty.
It's the scheduler's role to choose a fitting Node and fill out this field.
Before scoring the remaining Nodes and choosing the best one, the scheduler first filters out Nodes that do not fit, based on the Pod's and the Nodes' specifications.
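A consequence of this mechanism is that a Pod which sets spec.nodeName itself bypasses the scheduler entirely; the kubelet on the named node starts it without any filtering or scoring. A minimal sketch (the node name is hypothetical):

```yaml
# This Pod is never seen by the scheduler: spec.nodeName is already set,
# so the kubelet on "worker-node-1" picks it up directly.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeName: worker-node-1  # hypothetical node name
  containers:
    - name: app
      image: nginx
```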
When you specify spec.template.spec.taints
in a MachineDeployment, the kubelet registers the node with these taints.
Specify tolerations in your Pods to allow them to be scheduled on nodes with matching taints.
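A minimal sketch of the two sides, using an illustrative dedicated=database taint:

```yaml
# MachineDeployment excerpt: nodes created from it are registered
# with this taint by the kubelet.
spec:
  template:
    spec:
      taints:
        - key: dedicated
          value: database
          effect: NoSchedule
---
# Pod excerpt: this matching toleration allows the Pod to be
# scheduled onto those tainted nodes.
spec:
  tolerations:
    - key: dedicated
      operator: Equal
      value: database
      effect: NoSchedule
```

Note that a toleration only permits scheduling on the tainted nodes; to actively attract the Pod to them, combine it with a nodeSelector or affinity rule.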
Any labels in spec.template.spec.metadata.labels
of a MachineDeployment will be added to the corresponding nodes.
Use these labels in nodeSelector, affinity, or topologySpreadConstraints rules.
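For example, with an illustrative disktype: ssd label:

```yaml
# MachineDeployment excerpt: these labels end up on the resulting nodes.
spec:
  template:
    spec:
      metadata:
        labels:
          disktype: ssd  # illustrative label
---
# Pod excerpt: schedule only on nodes carrying that label.
spec:
  nodeSelector:
    disktype: ssd
```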
MetaKube also adds the machine-controller/physical-host-id
label to the nodes.
It contains a reference to the hardware node the server runs on.
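This label can serve as a topology key, for example to spread replicas of a workload across physical hosts even when several virtual nodes share the same hardware. A sketch, assuming the Pods carry a hypothetical app: my-app label:

```yaml
# Pod template excerpt: spread replicas evenly across physical hosts
# by using the MetaKube-provided node label as the topology key.
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: machine-controller/physical-host-id
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app  # hypothetical app label
```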