We’ll explore how to expose container services so that they can be reached from other containers and from the outside network. The details of Kubernetes Services are in the official documentation; in this entry I cover NodePort and ClusterIP only.
First of all, we need to change the cluster configuration again. Since we don’t have any overlay network yet, I’m going to give each worker node a unique container IP range so that containers can be reached via their IP addresses directly. This is, again, not ideal, and it will be addressed later.
Prepare for container routing
1. Change IP address range for containers
[ worker-1 ]
root@worker-1:/home/k_shogo# docker network list
NETWORK ID          NAME                DRIVER              SCOPE
1a6da5939ab8        bridge              bridge              local
f33806236084        host                host                local
ba28deac8b49        none                null                local
root@worker-1:/home/k_shogo# docker network inspect bridge | grep -i gateway
            "Gateway": "172.17.0.1"
root@worker-1:/home/k_shogo#
root@worker-1:/home/k_shogo# cat /etc/docker/daemon.json
cat: /etc/docker/daemon.json: No such file or directory
root@worker-1:/home/k_shogo# cat << EOF >> /etc/docker/daemon.json
> {
>   "bip": "10.200.1.1/24"
> }
> EOF
root@worker-1:/home/k_shogo# systemctl restart docker
root@worker-1:/home/k_shogo# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-09-29 22:30:12 UTC; 6s ago
     Docs: https://docs.docker.com
 Main PID: 5910 (dockerd)
    Tasks: 29
   Memory: 48.1M
      CPU: 453ms
   CGroup: /system.slice/docker.service
           ├─5910 /usr/bin/dockerd -H fd://
           ├─5938 docker-containerd --config /var/run/docker/containerd/containerd.toml
           └─6107 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/694478b80a3684f5cc71fff3bc61032196a7c415b34b0c4a1b3a3eca7

Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.442861847Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.443043590Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420189cc0, CONNECTING" module=grpc
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.443812760Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420189cc0, READY" module=grpc
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.444012860Z" level=info msg="Loading containers: start."
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.645679205Z" level=info msg="Loading containers: done."
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.664604761Z" level=info msg="Docker daemon" commit=e68fc7a graphdriver(s)=overlay2 version=18.06.1-ce
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.665023812Z" level=info msg="Daemon has completed initialization"
Sep 29 22:30:12 worker-1 systemd[1]: Started Docker Application Container Engine.
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12.677158468Z" level=info msg="API listen on /var/run/docker.sock"
Sep 29 22:30:12 worker-1 dockerd[5910]: time="2018-09-29T22:30:12Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/694478b80a3684f5cc71fff3bc61032196a7c415b
root@worker-1:/home/k_shogo# docker network list
NETWORK ID          NAME                DRIVER              SCOPE
c4fdf5ec80a8        bridge              bridge              local
f33806236084        host                host                local
ba28deac8b49        none                null                local
root@worker-1:/home/k_shogo# docker network inspect bridge | grep -i gateway
            "Gateway": "10.200.1.1"
root@worker-1:/home/k_shogo# docker network inspect bridge | grep -i subnet
            "Subnet": "10.200.1.1/24",
You need to do the same thing on each worker node; just change the bip to that node's own address range.
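As a sketch, worker-2 would get the next range. The 10.200.2.1/24 value below is an assumption, but it is consistent with the pod IP 10.200.2.2 that shows up later in this entry:

# [ worker-2 ] -- hypothetical example; adjust the range per node
cat << EOF > /etc/docker/daemon.json
{
  "bip": "10.200.2.1/24"
}
EOF
systemctl restart docker
docker network inspect bridge | grep -i gateway   # should now show "Gateway": "10.200.2.1"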
Deploy Kube-Proxy
1. Install Kube-proxy service
[ worker-1 ]
root@worker-1:/home/k_shogo/work# mv ./kubernetes/server/bin/kube-proxy /usr/local/bin/
root@worker-1:/home/k_shogo/work# cat << EOF >> /etc/systemd/system/kube-proxy.service
> [Unit]
> Description=Kubernetes Proxy Server
> Documentation=https://github.com/kubernetes/kubernetes
>
> [Service]
> ExecStart=/usr/local/bin/kube-proxy --master=http://10.240.0.11:8080
> Restart=on-failure
> RestartSec=5
>
> [Install]
> WantedBy=multi-user.target
> EOF
root@worker-1:/home/k_shogo/work# systemctl start kube-proxy
root@worker-1:/home/k_shogo/work# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; disabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-09-29 16:40:51 UTC; 8s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3058 (kube-proxy)
    Tasks: 0
   Memory: 9.1M
      CPU: 107ms
   CGroup: /system.slice/kube-proxy.service
           ‣ 3058 /usr/local/bin/kube-proxy --master=http://10.240.0.11:8080

Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491012    3058 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491063    3058 conntrack.go:52] Setting nf_conntrack_max to 131072
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491113    3058 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491134    3058 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491875    3058 config.go:202] Starting service config controller
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491900    3058 controller_utils.go:1019] Waiting for caches to sync for service config controller
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491936    3058 config.go:102] Starting endpoints config controller
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.491941    3058 controller_utils.go:1019] Waiting for caches to sync for endpoints config controller
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.592246    3058 controller_utils.go:1026] Caches are synced for endpoints config controller
Sep 29 16:40:51 worker-1 kube-proxy[3058]: I0929 16:40:51.592935    3058 controller_utils.go:1026] Caches are synced for service config controller
root@worker-1:/home/k_shogo/work# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /etc/systemd/system/kube-proxy.service.
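As a quick sanity check (a sketch, not part of the original steps), you can confirm that kube-proxy has started programming the nat table. You would expect to see at least a rule for the default kubernetes Service at 10.32.0.1:

# [ worker-1 ] -- hedged sanity check; KUBE-SERVICES is created by kube-proxy in iptables mode
iptables -t nat -L KUBE-SERVICES -n | head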
Test – Node Port service
With NodePort, the container can be accessed through its host's IP address on a dedicated port. That means it can be reached from anywhere (even the internet), as long as the client can reach that host IP address.
1. Deploy nginx and expose that service with node port
[ controller-1 ]
k_shogo@controller-1:~/work$ cat nginx-nodeport.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginxtest-nodeport
  labels:
    app: nginx
    lab: nodeport
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxtest-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
    lab: nodeport
  ports:
  - name: nginxtest-nodeport
    port: 80
    targetPort: 80
k_shogo@controller-1:~/work$ kubectl apply -f nginx-nodeport.yaml
pod "nginxtest-nodeport" created
service "nginxtest-nodeport" created
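Since no nodePort is specified in the Service above, Kubernetes picks one from the default 30000-32767 range. If you wanted a deterministic port instead, you could pin it in the spec (a sketch; not what this lab uses):

  ports:
  - name: nginxtest-nodeport
    port: 80
    targetPort: 80
    nodePort: 30070   # pin the node port explicitly; must fall inside the NodePort range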
2. Confirm the service is running
[ controller-1 ]
k_shogo@controller-1:~/work$ kubectl get pods -o wide
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE
nginxtest-nodeport   1/1       Running   0          1m        10.200.1.2   worker-1
k_shogo@controller-1:~/work$ kubectl get svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.32.0.1    <none>        443/TCP        5d
nginxtest-nodeport   NodePort    10.32.0.89   <none>        80:30070/TCP   12s
k_shogo@controller-1:~/work$ kubectl describe svc/nginxtest-nodeport
Name:                     nginxtest-nodeport
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginxtest-nodeport","namespace":"default"},"spec":{"ports":[{"name":"nginxtest...
Selector:                 app=nginx,lab=nodeport
Type:                     NodePort
IP:                       10.32.0.89
Port:                     nginxtest-nodeport  80/TCP
TargetPort:               80/TCP
NodePort:                 nginxtest-nodeport  30070/TCP
Endpoints:                10.200.1.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
From this output, you can see that the nginx pod (its actual pod IP address is 10.200.1.2) has been assigned node port 30070. Let's check whether that is the case.
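If you just want the assigned node port without reading through the whole describe output, a jsonpath query works too (a minimal sketch):

kubectl get svc nginxtest-nodeport -o jsonpath='{.spec.ports[0].nodePort}'
# prints 30070 for the service created above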
3. Check if node port works
[ worker-2 ]
root@worker-2:/home/k_shogo# curl http://worker-1:30070
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
[ controller-1 ]
shogo@controller-1:~/work$ curl http://10.240.0.21:30070
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
It is working.
This works because kube-proxy on the host (in this case worker-1) creates iptables rules that forward requests arriving on port 30070 to the container's IP address and port internally. Because this translation happens entirely on the host that receives the request, the service can be reached from anywhere that host is reachable.
[ worker-1 ]
root@worker-1:~# iptables-save | grep 30070
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginxtest-nodeport:nginxtest-nodeport" -m tcp --dport 30070 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginxtest-nodeport:nginxtest-nodeport" -m tcp --dport 30070 -j KUBE-SVC-Y4FZLDNSKOURDUGQ
root@worker-1:~# iptables -L KUBE-SVC-Y4FZLDNSKOURDUGQ -t nat
Chain KUBE-SVC-Y4FZLDNSKOURDUGQ (2 references)
target     prot opt source               destination
KUBE-SEP-QXCV3YI6FCBDVWQL  all  --  anywhere             anywhere             /* default/nginxtest-nodeport:nginxtest-nodeport */
root@worker-1:~# iptables -L KUBE-SEP-QXCV3YI6FCBDVWQL -t nat
Chain KUBE-SEP-QXCV3YI6FCBDVWQL (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  10.200.1.2           anywhere             /* default/nginxtest-nodeport:nginxtest-nodeport */
DNAT       tcp  --  anywhere             anywhere             /* default/nginxtest-nodeport:nginxtest-nodeport */ tcp to:10.200.1.2:80
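One point worth noting: kube-proxy programs the same KUBE-NODEPORTS rules on every node it runs on, not only on the node hosting the pod. So in principle any worker's IP should answer on port 30070, with the traffic then forwarded to the pod on worker-1. This is a hedged expectation and was not verified in this lab:

# hypothetical check -- any node running kube-proxy should expose the NodePort
curl http://worker-2:30070
iptables-save | grep 30070   # run on worker-2; the same KUBE-NODEPORTS rules should appear there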
Test – Cluster IP service
With ClusterIP, the container can be accessed only from hosts that run kube-proxy (i.e. hosts in the same cluster). The controller node is no exception: it would also need kube-proxy running if it needed access to those cluster IP addresses.
1. Deploy nginx and expose that service with cluster ip
[ controller-1 ]
shogo@controller-1:~/work$ cat nginx-clusterip.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-clusterip
  labels:
    app: nginx
    lab: clusterip
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP
  selector:
    app: nginx
    lab: clusterip
  ports:
  - name: nginx-clusterip
    protocol: TCP
    port: 8080
    targetPort: 80
shogo@controller-1:~/work$ kubectl apply -f nginx-clusterip.yaml
pod "nginx-clusterip" created
service "nginx-clusterip" created
2. Confirm the service is running
[ controller-1 ]
shogo@controller-1:~/work$ kubectl get pods -o wide
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-clusterip      1/1       Running   0          36s       10.200.2.2   worker-2
nginxtest-nodeport   1/1       Running   0          15m       10.200.1.2   worker-1
shogo@controller-1:~/work$ kubectl get svc
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.32.0.1    <none>        443/TCP        5d
nginx-clusterip      ClusterIP   10.32.0.50   <none>        8080/TCP       45s
nginxtest-nodeport   NodePort    10.32.0.89   <none>        80:30070/TCP   15m
shogo@controller-1:~/work$ kubectl describe svc/nginx-clusterip
Name:              nginx-clusterip
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"nginx-clusterip","namespace":"default"},"spec":{"ports":[{"name":"nginx-cluste...
Selector:          app=nginx,lab=clusterip
Type:              ClusterIP
IP:                10.32.0.50
Port:              nginx-clusterip  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.200.2.2:80
Session Affinity:  None
Events:            <none>
From this output, you can see that the nginx pod (its actual pod IP address is 10.200.2.2) has been assigned cluster IP 10.32.0.50 on port 8080. Let's check whether it really works.
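Another way to confirm the Service is wired to the pod (a small sketch) is to look at its Endpoints object, which should list the pod IP and target port:

kubectl get endpoints nginx-clusterip
# expected to show 10.200.2.2:80, matching the describe output above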
3. Check if cluster ip service works
[ worker-1 ]
root@worker-1:~# curl http://10.32.0.50:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
[ controller-1 ]
shogo@controller-1:~/work$ curl http://10.32.0.50:8080
curl: (7) Failed to connect to 10.32.0.50 port 8080: Connection timed out
Yes, it's working, and it can be accessed only from the worker nodes. Since the controller node doesn't have kube-proxy configured, it has no rule to rewrite a request destined for the cluster IP, and because that cluster IP is not known to the external network, the request is eventually dropped.
As a result of kube-proxy's iptables modifications, the original request is actually rewritten to target the backend container's IP address and port before it leaves the host.
So it is technically the same as sending the request to the container IP address directly, which is reachable from the controller node as well.

[ controller-1 ]
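You can see the difference directly in the nat table: worker-1 has KUBE-SERVICES rules for 10.32.0.50 (shown further below), while on controller-1 the same search would be expected to come back empty because kube-proxy never ran there (a hedged check, not captured in this lab):

# [ controller-1 ] -- hypothetical check; expect no output
sudo iptables-save | grep 10.32.0.50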
shogo@controller-1:~/work$ curl http://10.200.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Kube-proxy on this node has changed the iptables rules, and they rewrite the request:

[ worker-1 ]
root@worker-1:~# iptables-save | grep 10.32.0.50
-A KUBE-SERVICES -d 10.32.0.50/32 -p tcp -m comment --comment "default/nginx-clusterip:nginx-clusterip cluster IP" -m tcp --dport 8080 -j KUBE-SVC-A4IMIIJJXTXEHWFI
root@worker-1:~# iptables -L KUBE-SVC-A4IMIIJJXTXEHWFI -t nat
Chain KUBE-SVC-A4IMIIJJXTXEHWFI (1 references)
target     prot opt source               destination
KUBE-SEP-2V4CHBMLT6MH5RD7  all  --  anywhere             anywhere             /* default/nginx-clusterip:nginx-clusterip */
root@worker-1:~# iptables -L KUBE-SEP-2V4CHBMLT6MH5RD7 -t nat
Chain KUBE-SEP-2V4CHBMLT6MH5RD7 (1 references)
target     prot opt source               destination
KUBE-MARK-MASQ  all  --  10.200.2.2           anywhere             /* default/nginx-clusterip:nginx-clusterip */
DNAT       tcp  --  anywhere             anywhere             /* default/nginx-clusterip:nginx-clusterip */ tcp to:10.200.2.2:80
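If you want to watch the rewrite happen, you could capture traffic on worker-1 while running the curl from step 3; the packets leaving the host should already be addressed to 10.200.2.2:80 rather than to 10.32.0.50:8080. This is only a sketch: the interface name ens4 is an assumption, so adjust it to your environment.

# [ worker-1 ] -- hypothetical capture; interface name is environment-specific
tcpdump -ni ens4 'host 10.200.2.2 and tcp port 80'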
Housekeeping
Delete the pods and services we created during this lab.
shogo@controller-1:~/work$ kubectl delete -f nginx-nodeport.yaml
pod "nginxtest-nodeport" deleted
service "nginxtest-nodeport" deleted
shogo@controller-1:~/work$ kubectl delete -f nginx-clusterip.yaml
pod "nginx-clusterip" deleted
service "nginx-clusterip" deleted
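Optionally (a small sketch), confirm that only the default kubernetes Service remains:

kubectl get pods,svc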