A method to override external domain name resolution in CoreDNS

There are occasions when you need more control over the domain name resolution taking place inside a Kubernetes cluster.
For example, imagine the following scenarios.
  1. You are trying to replicate, in your dev K8s cluster, an issue that you observed in a different environment. To replicate the issue successfully, you need to use the same hostnames and application FQDNs that were used in the original environment.
  2. You are testing an application running on Kubernetes that requires access to third-party external endpoints over the internet. You need to ensure that these third-party connections are directed towards known test services, not the actual ones.
  3. You need an alternative, more centralized way to control the responses the application pods receive to their DNS queries (e.g. you prefer not to use the "hostAliases" option with K8s pods).
In all of the above scenarios, you can use the following method to override the responses sent by the internal Kubernetes DNS to your application pods.
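For comparison, the per-pod "hostAliases" approach mentioned in scenario 3 looks like the following. This is only a sketch for context (the pod name, image, and the IP/hostname pair are illustrative; the same mapping is configured centrally via dnsmasq later in this post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  # hostAliases writes extra records into this pod's /etc/hosts only;
  # every pod that needs the override must repeat this block.
  hostAliases:
  - ip: "10.100.1.1"
    hostnames:
    - "www.zinkworks.com"
  containers:
  - name: myapp
    image: myapp:latest
```

The drawback is clear: the override lives in every pod spec instead of one place, which is exactly what the dnsmasq-based method below avoids.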
Step 1

First, we are going to run a "dnsmasq" server as a pod in the same Kubernetes cluster. It will be associated with a configmap through which we can manipulate the DNS records. The dnsmasq pod will also be exposed via a ClusterIP-type Kubernetes service.

To run a "dnsmasq" server as a container, we first create a Docker image using the following Dockerfile.

FROM alpine:latest
RUN apk --no-cache add dnsmasq
VOLUME /var/dnsmasq
EXPOSE 53 53/udp
ENTRYPOINT ["dnsmasq","-d","-C","/var/dnsmasq/conf/dnsmasq.conf","-H","/var/dnsmasq/hosts"]

As you may notice in the ENTRYPOINT above, we run "dnsmasq" in the foreground (-d) and pass it a configuration file (-C) and a hosts file (-H). These files do not need to exist in the Docker image at build time. At runtime, we will mount the relevant configuration files to /var/dnsmasq/conf/dnsmasq.conf and /var/dnsmasq/hosts accordingly.

Now you can build the Docker image with the "docker build -t dnsmasq:latest ." command. Afterwards, you may push it to a Docker image repository that is accessible from your Kubernetes cluster.

Step 2

Now let us create a couple of configmaps to pass the dnsmasq.conf and hosts files to the running dnsmasq pod.

In the example below, we create a fake DNS record that resolves www.zinkworks.com to the address 10.100.1.1. We also create a hosts file record that maps example.zinkworks.com to 10.100.1.2.

 
#dnsmasq.conf file 
kind: ConfigMap 
apiVersion: v1 
metadata: 
  name: dnsmasq-conf 
  labels: 
    app: dnsmasq 
data:  
  dnsmasq.conf: | 
    address=/www.zinkworks.com/10.100.1.1 
 
#dnsmasq hosts file 
kind: ConfigMap 
apiVersion: v1 
metadata: 
  name: dnsmasq-hosts 
  labels: 
    app: dnsmasq 
data:
  hosts: |
    10.100.1.2 example.zinkworks.com

 

You can create the above configmaps in the namespace where you will run the dnsmasq pod, using the "kubectl create -f" or "kubectl apply -f" commands.
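For clarity, the address= directive in dnsmasq.conf maps a domain (and its subdomains) to a fixed IP. The following plain-Python sketch is purely illustrative of that mapping; it is not part of the deployment:

```python
# Illustrative only: mimic how dnsmasq turns "address=/domain/ip" lines
# into a domain -> IP override table.
def parse_address_lines(conf_text):
    overrides = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("address="):
            # "address=/www.zinkworks.com/10.100.1.1" -> domain, ip
            _, domain, ip = line.split("/", 2)
            overrides[domain] = ip
    return overrides

print(parse_address_lines("address=/www.zinkworks.com/10.100.1.1"))
# {'www.zinkworks.com': '10.100.1.1'}
```

Every query dnsmasq receives for a domain in this table is answered with the configured IP instead of being forwarded upstream.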

Next, let us create a dnsmasq pod using the Docker image built in step 1. We will also mount the previously created configmaps to /var/dnsmasq/conf/dnsmasq.conf and /var/dnsmasq/hosts as files.

 
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: dnsmasq
  name: dnsmasq
spec:
  containers:
  - image: dnsmasq:latest
    name: dnsmasq
    ports:
    - containerPort: 53
      protocol: UDP
      name: udp-53
    volumeMounts:
    - name: dnsmasq-conf
      mountPath: "/var/dnsmasq/conf"
    - name: dnsmasq-hosts
      mountPath: "/var/dnsmasq"
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
  dnsPolicy: None
  dnsConfig:
    nameservers:
    - "8.8.8.8"
  restartPolicy: Always
  volumes:
  - name: dnsmasq-conf
    configMap:
      defaultMode: 0666
      name: dnsmasq-conf
  - name: dnsmasq-hosts
    configMap:
      defaultMode: 0666
      name: dnsmasq-hosts

Above is an example pod definition to run the dnsmasq container with the desired configuration. Notice how the dnsmasq-conf and dnsmasq-hosts configmaps are mounted as files into the pod. Also notice the "NET_ADMIN" capability added to the container, which allows it to run dnsmasq on UDP port 53.

You can create the pod by running the "kubectl create -f" or "kubectl apply -f" commands on the above pod definition.

Once the dnsmasq pod is created, you can confirm that it is running correctly by looking at the pod logs.

Here is an example output. In this case, it is assumed that the dnsmasq pod is created in the dnsmasq namespace.

$ kubectl logs --namespace dnsmasq dnsmasq -f
 
dnsmasq: started, version 2.86 cachesize 150 
dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-UBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-cryptohash no-DNSSEC loop-detect inotify dumpfile 
dnsmasq: reading /etc/resolv.conf 
dnsmasq: using nameserver 8.8.8.8#53 
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /var/dnsmasq/hosts - 1 addresses

Finally, we will expose the dnsmasq pod via a Kubernetes service.

 
apiVersion: v1
kind: Service
metadata:
  name: dnsmasq
  labels:
    app: dnsmasq
spec:
  type: ClusterIP
  ports:
  - name: udp-53
    targetPort: udp-53
    port: 53
    protocol: UDP
  selector:
    app: dnsmasq

In the steps that follow, we will need the service IP assigned to the dnsmasq service in order to send our DNS queries to the dnsmasq container. To find the service IP of the dnsmasq service, you can run the "kubectl get svc" command in the namespace where the dnsmasq pod runs.

In our example the dnsmasq service IP is 10.108.96.48. 

$ kubectl get svc --namespace dnsmasq
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE 
dnsmasq   ClusterIP   10.108.96.48   <none>        53/UDP    15s 

 

You can further check that domain name resolution works as expected in the dnsmasq container by running a few nslookup commands from a different pod.

In the example below, we run a pod called ubuntu-debugger in the same cluster, from which we can check the responses sent by dnsmasq. It is assumed that the ubuntu-debugger pod has the nslookup utility installed.

$ kubectl exec -it ubuntu-debugger -- /bin/bash
root@ubuntu-debugger:/# nslookup www.zinkworks.com
Server: 10.96.0.10
Address: 10.96.0.10#53

Non-authoritative answer:
Name: www.zinkworks.com
Address: 46.101.47.85

root@ubuntu-debugger:/# nslookup www.zinkworks.com 10.108.96.48
Server: 10.108.96.48
Address: 10.108.96.48#53

Name: www.zinkworks.com
Address: 10.100.1.1

root@ubuntu-debugger:/# nslookup www.google.com 10.108.96.48
Server: 10.108.96.48
Address: 10.108.96.48#53

Non-authoritative answer:
Name: www.google.com
Address: 142.251.43.4
Name: www.google.com
Address: 2a00:1450:400f:804::2004

Notice how the first nslookup command for www.zinkworks.com returned the actual public IP instead of the fake one we provided to the dnsmasq container in the dnsmasq.conf file. This is because external DNS queries are handled by coredns, which by default forwards them to the nameservers specified in the /etc/resolv.conf file.

In the subsequent nslookup commands, we specify the service IP of the dnsmasq service, 10.108.96.48. This ensures that the DNS query is processed directly by our dnsmasq container. As you can see in the second nslookup command for www.zinkworks.com, we received the fake IP we configured in the dnsmasq container through the dnsmasq.conf file.

Any query that the dnsmasq service cannot answer will be sent to the next nameserver in the chain; in our case that is 8.8.8.8.
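This resolution order can be sketched as follows (illustrative Python, not a real DNS client; the `forward` callable stands in for the upstream 8.8.8.8 lookup):

```python
# Overrides mirroring the dnsmasq.conf and hosts-file records used in this post.
OVERRIDES = {
    "www.zinkworks.com": "10.100.1.1",      # from dnsmasq.conf
    "example.zinkworks.com": "10.100.1.2",  # from the hosts file
}

def resolve(name, forward):
    """Answer from local records first; otherwise forward upstream."""
    if name in OVERRIDES:
        return OVERRIDES[name]
    return forward(name)

upstream = lambda name: "<answer from 8.8.8.8 for %s>" % name
print(resolve("www.zinkworks.com", upstream))  # 10.100.1.1
print(resolve("www.google.com", upstream))     # <answer from 8.8.8.8 for www.google.com>
```

In other words, dnsmasq only acts as an authoritative source for the records we configured; everything else transparently falls through to the upstream nameserver.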

Step 3

As you noticed in step 2 above, external DNS queries are not forwarded to our dnsmasq container by default; we had to specify the nameserver in each query. This is not convenient. Therefore, next we look at how to configure coredns to forward all external DNS queries to the dnsmasq container by default.

First, we open the coredns configmap in edit mode by running the following command. 

$ kubectl edit configmap --namespace kube-system coredns

This will open a similar configuration as shown below on your editor. 

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be 
# reopened with the relevant failures. 
# 
apiVersion: v1 
data: 
  Corefile: | 
    .:53 { 
        errors 
        health { 
           lameduck 5s 
        } 
        ready 
        kubernetes cluster.local in-addr.arpa ip6.arpa { 
           pods insecure 
           fallthrough in-addr.arpa ip6.arpa 
           ttl 30 
        } 
        prometheus :9153 
        forward . /etc/resolv.conf { 
           max_concurrent 1000 
        } 
        cache 30 
        loop 
        reload 
        loadbalance 
    } 
kind: ConfigMap 
metadata: 
  creationTimestamp: "2020-07-17T05:30:30Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "301258170"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns 
  uid: 96742cb2-14d5-40fe-9890-7b58bb1fc408 

 

The "forward" section in the above configuration should now be modified to forward external DNS queries to our dnsmasq container.

After the modification the configmap should appear as shown below. 

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be 
# reopened with the relevant failures. 
# 
apiVersion: v1 
data: 
  Corefile: | 
    .:53 { 
        errors 
        health { 
           lameduck 5s 
        } 
        ready 
        kubernetes cluster.local in-addr.arpa ip6.arpa { 
           pods insecure 
           fallthrough in-addr.arpa ip6.arpa 
           ttl 30 
        } 
        prometheus :9153 
        forward . 10.108.96.48 
        cache 30 
        loop 
        reload 
        loadbalance 
    } 
kind: ConfigMap 
metadata: 
  creationTimestamp: "2020-07-17T05:30:30Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "301258170"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns 
  uid: 96742cb2-14d5-40fe-9890-7b58bb1fc408 

 

 

Next, save and exit the editor so that the above change is reflected in the coredns configmap.

Finally, restart the coredns pods in the kube-system namespace by deleting them. This ensures that the modified configuration is loaded into the new coredns pods that come up to replace the old replicas.

 Here is the command to delete the current coredns pods. 

$ kubectl delete pods --namespace kube-system --force --grace-period=0 $(kubectl get pods --namespace kube-system | grep coredns | awk -F ' ' '{print $1}')
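The `grep`/`awk` part of the command above simply extracts the coredns pod names from the `kubectl get pods` listing. Here is an offline check of that extraction, using a made-up sample of `kubectl get pods` output (the pod names are illustrative):

```shell
# Simulated `kubectl get pods --namespace kube-system` output.
sample='NAME                                READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-abcde            1/1     Running   0          3d
coredns-558bd4d5db-fghij            1/1     Running   0          3d
etcd-control-plane                  1/1     Running   0          3d'

# Keep only coredns lines, then print the first column (the pod name).
printf '%s\n' "$sample" | grep coredns | awk '{print $1}'
# prints:
# coredns-558bd4d5db-abcde
# coredns-558bd4d5db-fghij
```

As a gentler alternative to force-deleting the pods, if your cluster runs coredns as a Deployment named "coredns" (the default on kubeadm clusters) and you have kubectl 1.15 or newer, `kubectl rollout restart deployment coredns --namespace kube-system` achieves the same reload with a rolling restart.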

Now, let us run the same nslookup commands from our ubuntu-debugger container to see if we get the expected responses.

$ kubectl exec -it ubuntu-debugger -- /bin/bash
root@ubuntu-debugger:/# nslookup example.zinkworks.com
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: example.zinkworks.com
Address: 10.100.1.2

root@ubuntu-debugger:/# nslookup www.zinkworks.com
Server: 10.96.0.10
Address: 10.96.0.10#53

Name: www.zinkworks.com
Address: 10.100.1.1
As you can see in the above output, all our external DNS queries are now forwarded to the dnsmasq container by default. We no longer need to specify the dnsmasq service IP as the nameserver in our nslookup commands as we did in step 2.

As expected, the queries for www.zinkworks.com and example.zinkworks.com return the fake IPs we configured in the dnsmasq.conf and hosts files of the dnsmasq pod.

To find out more about careers at Zinkworks click here.