DNS

Architecture / data path

Each DNS query from a Kubernetes Pod takes the following path:

  1. Kubelet configures the containers in each Pod to use node-local-dns, which runs locally on each Kubernetes node, as their resolver.
  2. Queries for records in the cluster.local zone are answered from the cache or forwarded to CoreDNS.
  3. Queries for all other zones are forwarded to the host's default resolver (see /etc/resolv.conf and external names below).
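
The routing decision above can be sketched as a simple zone-suffix check (a simplified illustration; the real node-local-dns configuration matches DNS zones, not bare string suffixes):

```python
# Sketch of the per-query routing decision made by node-local-dns.
CLUSTER_ZONE = "cluster.local"

def route(name: str) -> str:
    """Return which resolver handles a query for `name`."""
    # Normalize a fully qualified name by stripping the trailing dot.
    name = name.rstrip(".")
    if name == CLUSTER_ZONE or name.endswith("." + CLUSTER_ZONE):
        # Answered from the local cache, or forwarded to CoreDNS on a miss.
        return "coredns"
    # Everything else goes to the host's default resolver (/etc/resolv.conf).
    return "host-resolver"

print(route("kubernetes.default.svc.cluster.local."))  # coredns
print(route("example.com"))                            # host-resolver
```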

Node local DNS cache

A large number of DNS queries can overload CoreDNS or lead to higher latency.
That's why MetaKube Core installs a local DNS cache resolver on each node.

  • It's deployed through the node-local-dns DaemonSet in the kube-system namespace and also runs CoreDNS, but with a different configuration.
  • The server listens on the fixed address 169.254.20.10:53 with UDP & TCP on a dummy interface nodelocaldns.
  • Containers are configured by Kubelet to use the node local DNS cache through an entry in their /etc/resolv.conf file.
  • Queries for names in the cluster.local domain are forwarded to CoreDNS and the results are cached.
  • Queries for other names are forwarded based on the host's /etc/resolv.conf configuration, see external names.
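
For illustration, the resulting /etc/resolv.conf inside a container looks roughly like this (a sketch; the search list shown assumes a Pod in the default namespace):

```
nameserver 169.254.20.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```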

For additional configuration, see extending CoreDNS configuration below.

CoreDNS

  • CoreDNS implements the Kubernetes DNS service.
  • It's deployed in the cluster through the Deployment coredns in the kube-system namespace.
  • The Deployment is automatically horizontally scaled based on its load.
  • The virtual IP of the CoreDNS Service is always the 10th IP in the Service network (default 10.240.16.10).
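
The offset can be computed from the Service CIDR with Python's ipaddress module; a quick sketch, assuming the default Service network 10.240.16.0/20 (an assumption — check your cluster's actual configuration):

```python
import ipaddress

# Assumption: the default Service network is 10.240.16.0/20; substitute
# your cluster's actual Service CIDR.
service_net = ipaddress.ip_network("10.240.16.0/20")

# The CoreDNS Service VIP is the 10th address in the network
# (counting from the network address at index 0).
coredns_vip = service_net[10]
print(coredns_vip)  # 10.240.16.10
```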

For additional configuration, see extending CoreDNS configuration below.

You can find more information about the kinds of records CoreDNS serves and how you can use CoreDNS in the official Kubernetes documentation.

External names

Queries for names not in the cluster.local domain are not answered by CoreDNS.
They use the following resolvers/servers:

  • Local DNS resolver (systemd-resolved) listening on 127.0.0.53:53
  • DNS servers of the OpenStack subnet configured through DHCP

Queries coming from the host directly (not from Pods) don't go to CoreDNS.
E.g. dig kubernetes.default.svc.cluster.local resolves from inside a Pod, but not from the host.

Extending the CoreDNS configuration

You can extend the configuration of both the node local DNS cache and CoreDNS.
To do so, create a ConfigMap named node-local-dns-extra-configs or coredns-extra-configs, respectively, in the kube-system namespace.

It must contain a valid CoreDNS-style configuration file under the Corefile key.

Example manifest

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-local-dns-extra-configs
  namespace: kube-system
data:
  Corefile: |
    example.com {
      bind 169.254.20.10
      forward . 1.2.3.4
    }

With this configuration, node local DNS will forward queries for names under the example.com domain to 1.2.3.4.

Caveats

  • Changes to the ConfigMaps won't take effect immediately.
    To load the new configuration, you must recreate the Pods:

    kubectl --namespace kube-system rollout restart deployment coredns
    kubectl --namespace kube-system rollout restart daemonset node-local-dns
  • When you add another listener to the node local DNS cache, you must use the bind directive to listen only on the nodelocaldns interface or its IP (169.254.20.10).
    Otherwise, CoreDNS listens on the wildcard address (all addresses), which includes 127.0.0.53, where the local cache resolver of systemd-resolved is already listening on port 53.
    CoreDNS will then fail to start, complaining that the port is already in use.
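
    For example, a minimal extra server block that pins its listener correctly (the zone internal.example and upstream 10.0.0.53 are placeholders):

    # Without "bind", this block would listen on all addresses and clash
    # with systemd-resolved on 127.0.0.53:53.
    internal.example {
      bind 169.254.20.10
      forward . 10.0.0.53
    }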

References