k8s 08: CoreDNS

In this entry, I’m going to deploy DNS for the Kubernetes cluster. For this purpose I’ll use CoreDNS, since it is now the recommended DNS service over kube-dns.

I will deploy this a bit differently from the other services: it will run as containers. In fact, most other Kubernetes components (except the kubelet and the container runtime) can also be deployed as containers rather than as system services, which makes them more resilient thanks to Kubernetes’ self-healing functionality. But deploying them that way requires knowing more about how Kubernetes works (affinity, tolerations, static pods), so I will cover it in a later article.

DNS in Kubernetes works just like ordinary DNS in the outside world: it resolves human-readable names to IP addresses. In the real world, you add a DNS record whenever you bring up a new service (a web service and so on). With CoreDNS, the DNS entry is created automatically as soon as you create a Service in the Kubernetes cluster.
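
For example (assuming the cluster domain cluster.local, which is what we will configure below), every Service gets a name that follows a fixed scheme:

# naming scheme for the Service records that CoreDNS serves:
#   <service>.<namespace>.svc.cluster.local
# e.g. a Service "nginx" in the "default" namespace is resolvable from pods as
#   nginx.default.svc.cluster.local -> the Service's cluster IP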

Deploy CoreDNS

CoreDNS is documented in detail on its official website.

1. Deploy CoreDNS containers on each node

In this setup, I use a DaemonSet, which deploys one instance on each worker node. Since this DNS is meant to be used by containers, I think it should run as close to them as possible.

[ controller-1 ]

# first we create the manifest file for CoreDNS
# it's composed of three parts:
# 1. ConfigMap - CoreDNS requires a Corefile (its config) when it launches. I use a ConfigMap here so that it can be accessed from all nodes without any extra servers
# 2. DaemonSet - the DaemonSet deploys one instance on each worker node. The ConfigMap from part #1 is mounted here to launch the instances
# 3. Service - it exposes the DNS service with a cluster IP.
shogo@controller-1:~/work$ cat << EOF > coredns.yaml
> apiVersion: v1
> kind: ConfigMap
> metadata:
>   name: coredns
>   namespace: kube-system
> data:
>   Corefile: |
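>     # quick notes on the plugins below (Corefile comments start with '#'):
>     # - kubernetes: serves cluster.local and the reverse zones from the API
>     #   server reachable at the endpoint URL
>     # - proxy: forwards everything else to the resolvers in /etc/resolv.conf
>     # - health / prometheus: liveness endpoint and metrics on :9153
>     # - cache / loop / reload / loadbalance: caching, loop detection,
>     #   automatic config reload, and shuffling of A/AAAA answers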
>     .:53 {
>           errors
>           health
>           kubernetes cluster.local in-addr.arpa ip6.arpa {
>               endpoint http://10.240.0.11:8080
>               pods insecure
>               upstream
>               fallthrough in-addr.arpa ip6.arpa
>           }
>           prometheus :9153
>           proxy . /etc/resolv.conf
>           cache 30
>           loop
>           reload
>           loadbalance
>     }
> ---
> apiVersion: apps/v1
> kind: DaemonSet
> metadata:
>   name: coredns
>   namespace: kube-system
>   labels:
>     app: dns
> spec:
>   minReadySeconds: 10
>   selector:
>     matchLabels:
>       app: dns
>   updateStrategy:
>     rollingUpdate:
>       maxUnavailable: 1
>   template:
>     metadata:
>       labels:
>         app: dns
>     spec:
>       dnsPolicy: Default
>       volumes:
>       - name: config-volume
>         configMap:
>           name: coredns
>           items:
>           - key: Corefile
>             path: Corefile
>       containers:
>       - name: coredns
>         image: coredns/coredns
>         args: [ "-conf", "/etc/coredns/Corefile" ]
>         volumeMounts:
>         - name: config-volume
>           mountPath: /etc/coredns
>           readOnly: true
>         ports:
>         - containerPort: 53
>           name: dns
>           protocol: UDP
>         - containerPort: 53
>           name: dns-tcp
>           protocol: TCP
>         - containerPort: 9153
>           name: metrics
>           protocol: TCP
> ---
> apiVersion: v1
> kind: Service
> metadata:
>   name: dns
>   namespace: kube-system
>   annotations:
>     prometheus.io/port: "9153"
>     prometheus.io/scrape: "true"
>   labels:
>     app: dns
>     kubernetes.io/cluster-service: "true"
>     kubernetes.io/name: "CoreDNS"
> spec:
>   type: ClusterIP
>   selector:
>     app: dns
>   clusterIP: 10.32.0.10
>   ports:
>   - name: dns
>     port: 53
>     protocol: UDP
>   - name: dns-tcp
>     port: 53
>     protocol: TCP
> EOF
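
Optionally (my addition, not part of the original run), you can let kubectl validate the manifest before creating anything:

# sanity check only; with --dry-run nothing is persisted
shogo@controller-1:~/work$ kubectl apply -f coredns.yaml --dry-run
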
shogo@controller-1:~/work$ kubectl apply -f coredns.yaml
configmap "coredns" created
daemonset.apps "coredns" created
service "dns" created

2. Confirm that CoreDNS is deployed

[ controller-1 ]

# confirm daemonset is deployed
k_shogo@controller-1:~/work$ kubectl get ds --all-namespaces
NAMESPACE     NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   coredns   2         2         2         2            2           <none>          3m

# confirm exactly one instance is deployed on each worker node
k_shogo@controller-1:~/work$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME            READY     STATUS    RESTARTS   AGE       IP           NODE
kube-system   coredns-pthb2   1/1       Running   0          3m        10.200.2.2   worker-2
kube-system   coredns-rwdzh   1/1       Running   0          3m        10.200.1.2   worker-1

# confirm service is correctly exposed
k_shogo@controller-1:~/work$ kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.32.0.1    <none>        443/TCP         6d
kube-system   dns          ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP   4m
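
One extra check worth doing (my addition): the Service should be backed by the CoreDNS pods. The ENDPOINTS column should list the pod IPs we saw above, 10.200.1.2 and 10.200.2.2, on port 53.

# the cluster IP load-balances to these endpoints
k_shogo@controller-1:~/work$ kubectl get endpoints dns --namespace kube-system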

3. Confirm that CoreDNS is working correctly

[ controller-1 ]

# the DNS cluster IP is not reachable from here, because the master node doesn't have kube-proxy installed
k_shogo@controller-1:~/work$ nslookup -type=a www.google.com 10.32.0.10
;; connection timed out; no servers could be reached
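
If you want to rule out CoreDNS itself here, and your controller happens to have routes to the pod CIDRs (this depends on your network setup), you can query a CoreDNS pod directly by its pod IP instead of the cluster IP:

# optional check, assuming pod CIDR routes exist on the controller;
# 10.200.1.2 is the coredns pod on worker-1 (see the listing above)
k_shogo@controller-1:~/work$ nslookup -type=a www.google.com 10.200.1.2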

[ worker-1 ]

# a worker node can use the cluster IP (backed by CoreDNS)
shogo@worker-1:~$ nslookup -type=a www.google.com 10.32.0.10
Server: 10.32.0.10
Address: 10.32.0.10#53

Non-authoritative answer:
Name: www.google.com
Address: 172.217.16.196
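
The cluster-internal zone works the same way; for example, the kubernetes Service we listed earlier should resolve to its cluster IP, 10.32.0.1:

# FQDN lookup against the same server; expect 10.32.0.1 in the answer
shogo@worker-1:~$ nslookup kubernetes.default.svc.cluster.local 10.32.0.10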

So we confirmed that CoreDNS seems to be working, but we still need to change some kubelet config so that this new DNS service is propagated to the containers.


Modify Kubelet

1. Reconfigure Kubelet system service

The kubelet takes a number of flags; their values are passed down to Docker and eventually end up inside the containers. The two we care about here are --cluster-dns and --cluster-domain.

# By default, a container inherits the /etc/resolv.conf of its host.
# Here it points at a link-local (APIPA) address, the GCE metadata server, because I'm running on GCE
/ # cat /etc/resolv.conf
nameserver 169.254.169.254
search c.python100pj.internal google.internal

[ each worker node ]

# Modify kubelet service
root@worker-1:~# cat /lib/systemd/system/kubelet.service 
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
[Service]
ExecStart=/usr/bin/kubelet --kubeconfig=/var/lib/kubelet/worker.kubeconfig --cluster-dns=10.32.0.10 --cluster-domain="cluster.local"
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
# reload systemd and restart the kubelet
root@worker-1:~# systemctl daemon-reload
root@worker-1:~# systemctl restart kubelet
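
To double-check that the kubelet actually picked up the new flags, look at its command line:

# the running process should now show --cluster-dns=10.32.0.10 and --cluster-domain=cluster.local
root@worker-1:~# ps aux | grep '[k]ubelet'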

Test

So it should all be working now. First, I launch an nginx pod with a ClusterIP Service, using the manifest file from the previous post.

[ controller-1 ]

k_shogo@controller-1:~/work$ kubectl get pods -o wide
NAME              READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-clusterip   1/1       Running   0          8s        10.200.2.2   worker-2
k_shogo@controller-1:~/work$ kubectl get svc
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
kubernetes        ClusterIP   10.32.0.1     <none>        443/TCP    7d
nginx-clusterip   ClusterIP   10.32.0.130   <none>        8080/TCP   17s

Next, launch a busybox pod. Note that as of Sept. 2018, the latest busybox image has some issues with nslookup, hence I’m using version 1.28 here.

[ controller-1 ]

shogo@controller-1:~/work$ kubectl run busybox --image=busybox:1.28 -- sleep 3600
deployment.apps "busybox" created
shogo@controller-1:~/work$ kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP           NODE
busybox-7787dddfb9-lv2gk   1/1       Running   0          30s       10.200.1.2   worker-1
nginx-clusterip            1/1       Running   0          1m        10.200.2.2   worker-2

Find the busybox container on the worker node and log in to it.

root@worker-1:~# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
d22e6db9a14d        8c811b4aec35           "sleep 3600"             54 seconds ago      Up 54 seconds                           k8s_busybox_busybox-7787dddfb9-lv2gk_default_ef5677f6-c62c-11e8-8c3b-42010af0000b_0
7cd0de6848e0        k8s.gcr.io/pause:3.1   "/pause"                 54 seconds ago      Up 54 seconds                           k8s_POD_busybox-7787dddfb9-lv2gk_default_ef5677f6-c62c-11e8-8c3b-42010af0000b_0
354a221b0ea6        coredns/coredns        "/coredns -conf /etc…"   40 minutes ago      Up 40 minutes                           k8s_coredns_coredns-lxmpf_kube-system_e8f1abe4-c626-11e8-8c3b-42010af0000b_1
e76e0cae7876        k8s.gcr.io/pause:3.1   "/pause"                 40 minutes ago      Up 40 minutes                           k8s_POD_coredns-lxmpf_kube-system_e8f1abe4-c626-11e8-8c3b-42010af0000b_1
root@worker-1:~# docker exec -it d22e6db9a14d sh
/ #
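
Before testing lookups, peek at this container's /etc/resolv.conf: the kubelet should now have injected the cluster DNS and the search domains that make short names like kubernetes work (the exact search list may vary; the host's search domains can be appended as well):

# roughly what to expect inside the pod now
/ # cat /etc/resolv.conf
nameserver 10.32.0.10
search default.svc.cluster.local svc.cluster.local cluster.local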

And confirm that it can resolve names and reach the service:

root@worker-1:~# docker exec -it d22e6db9a14d sh
/ # nslookup kubernetes
Server:    10.32.0.10
Address 1: 10.32.0.10 dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-clusterip
Server:    10.32.0.10
Address 1: 10.32.0.10 dns.kube-system.svc.cluster.local
Name:      nginx-clusterip
Address 1: 10.32.0.130 nginx-clusterip.default.svc.cluster.local
/ # nslookup www.google.com
Server:    10.32.0.10
Address 1: 10.32.0.10 dns.kube-system.svc.cluster.local
Name:      www.google.com
Address 1: 2a00:1450:4001:815::2004 fra15s12-in-x04.1e100.net
Address 2: 216.58.205.228 fra15s24-in-f4.1e100.net
/ # wget -S --spider http://nginx-clusterip:8080
Connecting to nginx-clusterip:8080 (10.32.0.130:8080)
  HTTP/1.1 200 OK
  Server: nginx/1.15.4
  Date: Tue, 02 Oct 2018 10:25:57 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
  Connection: close
  ETag: "5baa4e63-264"
  Accept-Ranges: bytes

We have a working CoreDNS now!

Before leaving, don’t forget to do the housekeeping.

shogo@controller-1:~/work$ kubectl delete -f nginx-clusterip.yaml 
pod "nginx-clusterip" deleted
service "nginx-clusterip" deleted
shogo@controller-1:~/work$ kubectl delete deploy/busybox
deployment.extensions "busybox" deleted

That’s all for today.