k8s 07: Services

We'll explore how to expose container services so that they can be reached from other containers and from the outside network. The details of Kubernetes Services are covered here in the official documentation, but in this entry I show only NodePort and ClusterIP.

First of all, we need to change the cluster config again. Since we don't have any overlay network yet, I'm going to configure each worker node with a unique container IP range so that containers can be reached directly via their IP addresses. This is, again, not ideal, and it will be addressed later.

Prepare for container routing

1. Change the IP address range for containers

[ worker-1 ]
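As a rough sketch (the exact commands may differ), assuming Docker is the container runtime and worker-1 gets the 10.200.1.0/24 range, the bridge IP can be set in /etc/docker/daemon.json and Docker restarted:

# /etc/docker/daemon.json -- "bip" sets the docker0 bridge address,
# which determines the IP range containers on this node receive
{
  "bip": "10.200.1.1/24"
}

# apply the change
sudo systemctl restart docker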

You need to do the same thing on each worker node; just change the bip to the respective IP range.

Deploy Kube-Proxy

1. Install the kube-proxy service

[ worker-1 ]
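As a sketch only, assuming the kube-proxy binary is already downloaded and a kubeconfig for it exists (the paths and the 10.200.0.0/16 cluster CIDR below are assumptions), a minimal systemd unit could look like this:

# /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy

[Service]
# iptables mode is the default on Linux, which is what the tests below rely on
ExecStart=/usr/local/bin/kube-proxy \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --cluster-cidr=10.200.0.0/16
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# enable and start it
sudo systemctl daemon-reload
sudo systemctl enable --now kube-proxy

Do the same on worker-2, since the cluster IP test below relies on kube-proxy running on both worker nodes.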

Test – Node Port service

With NodePort, the container can be accessed through its host's IP address, so it can be reached from anywhere (even the internet) as long as the client can reach that host IP address.

1. Deploy nginx and expose it as a NodePort service

[ controller-1 ]
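As a sketch (the pod and service names are placeholders I picked, and 30070 matches the node port referenced in the output below):

# run an nginx pod with a label the service can select on
kubectl run nginx-np --image=nginx --labels="app=nginx-np" --port=80

# create a NodePort service that pins the node port to 30070
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx-np
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30070
EOF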

2. Confirm the service is running

[ controller-1 ]
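Commands along these lines produce the output referred to below (names as assumed above):

# shows the pod IP (10.200.1.2) and the node it landed on
kubectl get pods -o wide

# PORT(S) shows the service port and the node port, e.g. 80:30070/TCP
kubectl get svc nginx-nodeport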

From this output, you can see that the nginx container (its actual pod IP address is 10.200.1.2) has been assigned node port 30070. Let's check whether that is the case.

3. Check if the node port works

[ worker-2 ]

[ controller-1 ]
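The check itself is a plain HTTP request against worker-1's host address on the node port; replace the placeholder with the node's real IP. The same command works from worker-2, controller-1, or anywhere else that can reach worker-1:

# should return the nginx welcome page
curl http://<worker-1 host IP>:30070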

It is working.

It works because kube-proxy on the host (in this case worker-1) creates iptables rules that forward requests arriving on port 30070 to the container's IP address and port. Because this rewriting happens entirely on the destination host, the service can be reached from anywhere.

[ worker-1 ]
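You can inspect those rules on worker-1 with something like the following (the KUBE-NODEPORTS chain is created by kube-proxy in iptables mode; exact output varies by version):

# list the NAT rules kube-proxy installed for node ports
sudo iptables -t nat -L KUBE-NODEPORTS -n

# or search for the specific port
sudo iptables -t nat -L -n | grep 30070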


Test – Cluster IP service

With a cluster IP, the container can be accessed only from hosts that run kube-proxy (that is, hosts in the same cluster). The controller node is no exception: it would also need kube-proxy running if it needed access to those cluster IP addresses.

1. Deploy nginx and expose it as a ClusterIP service

[ controller-1 ]
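As a sketch (again with names I made up; the service port 8080 matches the output below, while the cluster IP itself is allocated by the API server from the service IP range):

# run another nginx pod
kubectl run nginx-cip --image=nginx --labels="app=nginx-cip" --port=80

# expose it as a ClusterIP service (the default type), mapping port 8080 to 80
kubectl expose pod nginx-cip --name=nginx-clusterip --port=8080 --target-port=80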

2. Confirm the service is running

[ controller-1 ]
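As before, something like this shows the information referenced below:

# pod IP (10.200.2.2) and node placement
kubectl get pods -o wide

# CLUSTER-IP (10.32.0.50) and PORT(S) (8080/TCP)
kubectl get svc nginx-clusterip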

From this output, you can see that the nginx container (its actual pod IP address is 10.200.2.2) has been assigned cluster IP 10.32.0.50 on port 8080. Let's check whether it really works.

3. Check if the cluster IP service works

[ worker-1 ]

[ controller-1 ]
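The check is again a plain HTTP request, this time against the cluster IP and the service port:

# succeeds from the worker nodes, times out from controller-1
curl http://10.32.0.50:8080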

Yes, it's working, and it can be accessed only from the worker nodes. Since the controller node doesn't have kube-proxy configured, it doesn't know how to rewrite the request to the cluster IP before sending it out. And since this cluster IP is not known to the external network, the request is eventually dropped.

As a result of the iptables changes made by kube-proxy, the original request is rewritten into a request to the backend container's IP address and port before it leaves the host.

[ controller-1 ] So it's technically the same as sending the request directly to the container IP address, which is reachable from the controller node as well.
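A sketch of that direct request, using the pod IP and port from the output above (this assumes controller-1 has a route to the 10.200.2.0/24 range):

# bypass the service and hit the pod directly
curl http://10.200.2.2:80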

[ worker-1 ] kube-proxy on this node changes the iptables rules, and that is what rewrites the request.
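The rewrite target can be seen in the KUBE-SERVICES chain that kube-proxy maintains (a sketch; the rule jumps to a per-service chain that DNATs to the pod, and the output format varies by version):

# look for the rule matching the cluster IP
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.32.0.50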


Housekeeping

Delete the pods and services we created during this lab.
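With the names assumed above, the cleanup would be:

kubectl delete svc nginx-nodeport nginx-clusterip
kubectl delete pod nginx-np nginx-cip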