When we look at customers and the problems they encounter using Kubernetes, one of the most prominent issues is CoreDNS (the cluster DNS server). Before CoreDNS came in we had kube-dns. CoreDNS went GA in Kubernetes 1.11, so clusters created on 1.11 or later get CoreDNS by default; clusters created with Kubernetes 1.10 and earlier shipped with the kube-dns DNS server.

The Setup

CoreDNS is a fast, extensible, and flexible Kubernetes DNS server. CoreDNS can listen for DNS requests coming in over UDP/TCP, TLS, and gRPC. Its configuration is controlled by a Corefile, which is stored in a Kubernetes ConfigMap that you can manipulate directly.
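For example, you can inspect the live configuration directly. This assumes the default ConfigMap name, coredns, used by EKS and kubeadm:

```shell
# Print the Corefile that CoreDNS is currently running with
kubectl -n kube-system get configmap coredns -o yaml
```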

The Challenge

CoreDNS is able to:

  • Serve zone data from a file (both DNSSEC and plain DNS are supported);
  • Cache all records except zone transfers and metadata records for fast lookup times;
  • Use etcd as a backend;
  • Implement the Kubernetes DNS-Based Service Discovery Specification;
  • Serve as a proxy to forward queries to another nameserver;
  • Export metrics, for itself and for any plugin that provides them, to Prometheus;
  • Provide error logging;
  • Serve zone data from resource record sets in AWS Route 53;
  • And more, via plugins.

In large-scale, highly trafficked Kubernetes clusters, CoreDNS's memory usage is for the most part affected by the number of Pods and Services in the cluster. Other aspects that affect performance include the size of the filled DNS answer cache and the rate of queries received (QPS) per CoreDNS instance.
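Since CoreDNS exports Prometheus metrics on port 9153 (as listed above), you can inspect the query rate and cache size directly. A sketch, assuming default labels; metric names can differ slightly between CoreDNS releases:

```shell
# Pick one CoreDNS Pod and forward its metrics port locally
POD=$(kubectl -n kube-system get pods -l k8s-app=kube-dns -o name | head -n1)
kubectl -n kube-system port-forward "$POD" 9153:9153 &

# Look at query rate and cache size
# (older releases use slightly different metric names)
curl -s http://localhost:9153/metrics | grep -E 'coredns_dns_requests_total|coredns_cache_entries'
```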

The Event

If you happen to be facing problems with DNS resolution in your Kubernetes cluster, the first thing to do is look at your CoreDNS Pods.

EKS runs 2 replicas of CoreDNS by default. In your cluster, it might be 1, 2, or 3; AWS recommends that you run at least 2 at a time.
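To check the replica count in your own cluster (assuming the default Deployment name, coredns):

```shell
# Show the CoreDNS Deployment and its current replica count
kubectl -n kube-system get deployment coredns
```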

The Root Cause

If you are facing issues with CoreDNS, first check whether CoreDNS Pods are running.

$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-79d667b89f-hcwnr   1/1     Running   0          63d
coredns-79d667b89f-qh8cd   1/1     Running   0          63d

If they are running, check whether the kube-dns Service is up.

$ kubectl get svc kube-dns --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   ...          <none>        53/UDP,53/TCP   63d

If your CoreDNS Pods are running and the Service is up, but you still cannot get name resolution, or resolution is slow, the next thing to check is whether the Service actually has endpoints pointing at the CoreDNS Pods, and whether lookups work from inside the cluster.
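Two quick checks here - whether the Service has endpoints backing it, and whether resolution works from a Pod. The test Pod name and image below are just examples:

```shell
# The kube-dns Service should list the CoreDNS Pod IPs as endpoints
kubectl -n kube-system get endpoints kube-dns

# Run a throwaway Pod and try resolving an in-cluster name
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.36 -- \
  nslookup kubernetes.default
```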

If that doesn't surface the problem, you can then add logging to your CoreDNS.

Enable Logging in CoreDNS

Add the log plugin to the Corefile.

$ kubectl -n kube-system edit configmap coredns
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log
        kubernetes cluster.local {
            pods insecure
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }

By using log, you dump all queries and parts of the reply to standard output. You can then run this command, which loops over all CoreDNS Pods and prints the logs of each:

$ for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done

2019-09-12T1647.907Z [INFO] CoreDNS-1.2.6
2019-09-12T1647.907Z [INFO] linux/amd64, go1.22.4, 251875j9
[INFO] plugin/reload: Running configuration MD5 = 2e2180a5eeb3ebf92a5100ab081a6381
W1002 0656.326426  1 reflector.go:341] watch of *v1.Namespace ended with: too old resource version: 123473 (4874440)
W1002 2207.890987  1 reflector.go:341] watch of *v1.Namespace ended with: too old resource version: 4874440 (5041806)
W1002 2207.893920  1 reflector.go:341] watch of *v1.Service ended with: too old resource version: 123345 (5041806)
W1008 0053.440168  1 reflector.go:341] watch of *v1.Service ended with: too old resource version: 5041806 (6327012)

The Fix

Here are some ways of fixing these problems:

1. Scale your replicas - you can scale them to 3, 4, or 5 replicas, e.g.

$ kubectl -n kube-system scale --replicas=3 deployment/coredns

2. Run CoreDNS as a DaemonSet on each Kubernetes node

Sometimes scaling replicas still doesn't help. For that, the second option is the NodeLocal DNSCache addon: it creates a DaemonSet that runs a DNS cache on each worker node, and queries from the Pods on that node are sent to this local cache first, which enables faster name resolution.
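Once NodeLocal DNSCache is installed, you can verify that its DaemonSet has a Pod on every worker node. The names below are the upstream defaults:

```shell
# One node-local-dns Pod should be running per worker node
kubectl -n kube-system get daemonset node-local-dns
kubectl -n kube-system get pods -l k8s-app=node-local-dns -o wide
```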

3. Check the memory requirements for CoreDNS

  • CoreDNS comes with some defaults; for example, EKS recommends that at a minimum CoreDNS should request 70Mi of memory, with a limit of 170Mi;
  • If you're running a larger cluster, you should change this, and the calculation is pretty simple:

        Memory required (MB) = (Pods + Services) / 1000 + 54

        The 54 comes from: 19Mi that the CoreDNS application itself needs, plus 30Mi for caching, plus 5Mi of buffer.
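As a quick sanity check, here is the formula applied to a hypothetical cluster of 4000 Pods and 1000 Services:

```shell
# Memory required (MB) = (Pods + Services) / 1000 + 54
pods=4000
services=1000
echo "$(( (pods + services) / 1000 + 54 )) MB"   # prints "59 MB"
```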

4. Another option is the AutoPath plugin, which can speed up resolution of names that are external to your cluster.

AutoPath mitigates the ClusterFirst search-list penalty by answering Pods in one round trip rather than five, cutting the number of DNS queries hitting the backend from five down to one. With the AutoPath plugin, external name resolution can sometimes be 10x faster. However, the downside is that your CoreDNS Pods now need significantly more memory - roughly 4x more with AutoPath enabled. It also adds load on the Kubernetes API server, because it must track all changes to Pods.
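To enable AutoPath, the kubernetes plugin has to be able to verify Pod records, so `pods insecure` becomes `pods verified`. A sketch of the relevant Corefile block, adapted from the configuration shown earlier:

```
.:53 {
    log
    autopath @kubernetes
    kubernetes cluster.local {
        pods verified
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
}
```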


As with all computer systems, failure in Kubernetes is inevitable, and perhaps quite normal. Most Kubernetes operators will attest to the fact that DNS resolution and the DNS server are a major cause of these failures. High latency in DNS resolution will bring your cluster to a crawl and, worse still, result in cascading failures. In this post, we looked at how to troubleshoot DNS issues in Kubernetes and also how to mitigate them.

By incorporating Kalc in your cluster, you can calculate the impact of scaling on your mission-critical cluster components. Kalc will scan your cluster configurations and, using an ever-increasing knowledge base, leverage AI to predict when and how your next failure will unfold. This tool generates detailed reports on your cluster health, helping you to avert service disruption.